Truth Tables

Identity
A A
0 0
1 1


Negation
A ¬A
0 1
1 0


AND - conjunction
A B A ∧ B
0 0 0
0 1 0
1 0 0
1 1 1


NAND
A B A ↑ B
0 0 1
0 1 1
1 0 1
1 1 0


OR - disjunction
A B A ∨ B
0 0 0
0 1 1
1 0 1
1 1 1


NOR
A B A ↓ B
0 0 1
0 1 0
1 0 0
1 1 0


EQUALS
A B A == B
0 0 1
0 1 0
1 0 0
1 1 1


Implication - conditional if - then
A B A → B
0 0 1
0 1 1
1 0 0
1 1 1


Converse implication - conditional then - if
A B A ← B
0 0 1
0 1 0
1 0 1
1 1 1


XOR - exclusive or
A B A ⊕ B
0 0 0
0 1 1
1 0 1
1 1 0


XNOR - exclusive nor - biconditional - if and only if
A B A ↔ B
0 0 1
0 1 0
1 0 0
1 1 1
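The tables above map directly onto C#'s boolean operators; a quick sketch that prints the implication and XOR columns (note that `==` on `bool` gives the XNOR/EQUALS table):

```csharp
using System;

public static class Logic
{
    public static bool And(bool a, bool b) => a && b;        // conjunction
    public static bool Nand(bool a, bool b) => !(a && b);
    public static bool Or(bool a, bool b) => a || b;         // disjunction
    public static bool Nor(bool a, bool b) => !(a || b);
    public static bool Implies(bool a, bool b) => !a || b;   // false only when A=1, B=0
    public static bool Xor(bool a, bool b) => a ^ b;
    public static bool Xnor(bool a, bool b) => a == b;       // biconditional / EQUALS

    public static void Main()
    {
        foreach (var a in new[] { false, true })
            foreach (var b in new[] { false, true })
                Console.WriteLine($"{(a ? 1 : 0)} {(b ? 1 : 0)}  A→B={(Implies(a, b) ? 1 : 0)}  A⊕B={(Xor(a, b) ? 1 : 0)}");
    }
}
```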

RabbitMQ - AMQP


Scenario: AMQP [Advanced Message Queuing Protocol] on RabbitMQ.

Solution:

Below is how AMQP works on RabbitMQ:
  1. It uses RPC for two-way client communication with the broker.
  2. RabbitMQ commands are a combination of class and method [e.g. Connection.Start].
  3. A frame carries all the data needed to communicate. Below are the types of frames:
    1. Method
    2. Content header
    3. Content body
    4. Heartbeat
  4. Each frame consists of bytes for:
    1. Frame type [e.g. method]
    2. Channel
    3. Size [of the payload]
    4. Frame-specific content
      1. Class
      2. Method
    5. Frame End
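As a concrete sketch of the frame layout above: in AMQP 0-9-1 the general frame is a 1-byte frame type, a 2-byte channel number, a 4-byte payload size (all big-endian), the payload itself, and a fixed frame-end octet (0xCE). A minimal builder, with the method payload (class id + method id) left opaque:

```csharp
using System;

public static class AmqpFrame
{
    public const byte FrameEnd = 0xCE;   // fixed end-of-frame sentinel

    // type: 1 = method, 2 = content header, 3 = content body, 8 = heartbeat
    public static byte[] Build(byte type, ushort channel, byte[] payload)
    {
        var frame = new byte[7 + payload.Length + 1];
        frame[0] = type;
        frame[1] = (byte)(channel >> 8);      // channel number, big-endian
        frame[2] = (byte)channel;
        uint size = (uint)payload.Length;     // payload size, big-endian
        frame[3] = (byte)(size >> 24);
        frame[4] = (byte)(size >> 16);
        frame[5] = (byte)(size >> 8);
        frame[6] = (byte)size;
        Buffer.BlockCopy(payload, 0, frame, 7, payload.Length);
        frame[frame.Length - 1] = FrameEnd;
        return frame;
    }
}
```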

RabbitMQ - Overview


Scenario: Rabbit MQ as Message Broker

Solution:

Per RabbitMQ:

RabbitMQ is an open-source message-broker software that originally implemented the Advanced Message Queuing Protocol and has since been extended with a plug-in architecture to support Streaming Text Oriented Messaging Protocol, MQ Telemetry Transport, and other protocols.

Below are the components of RabbitMQ. A post office makes a good analogy:

  1. Producer - produces a message [dropping a letter].
    1. The producer sends messages to one or more exchanges.
  2. Message Broker [Post Office - it knows how to deliver the posted message to the receiver]
    1. Exchange [Postal Department]
      1. There can be multiple exchanges.
      2. Exchanges push messages to one or more queues.
      3. Types of exchanges
        1. Direct
        2. Topic
        3. Header
        4. Fanout - publishes the same message to multiple queues (each queue gets its own reference; the exchange stores the message only once).
    2. Queues [Letter box - the person checks it from time to time, or once a day, etc.]
      1. They are tied to an exchange through a binding.
  3. Consumer [Letter receiver]
    1. The consumer listens for messages pushed to zero, one, or more queues.
  4. Connections
    1. The Producer and Consumer each open one connection to the Message Broker over TCP.
  5. Channels
    1. A connection can have multiple channels, so there is one connection, but messages are pushed over channels (like threads).

    //Producer (requires the RabbitMQ.Client NuGet package; needs using RabbitMQ.Client, RabbitMQ.Client.Events, and System.Text)
            static void Main(string[] args)
            {
                var factory = new ConnectionFactory() { HostName = "localhost" };
                var connection = factory.CreateConnection();
                var channel = connection.CreateModel();

                // queue: The name of the queue. Pass an empty string to make the server generate a name.
                // durable: Should this queue survive a broker restart?
                // exclusive: Should use of this queue be limited to its declaring connection? Such a queue will be deleted when its declaring connection closes.
                // autoDelete: Should this queue be auto-deleted when its last consumer (if any) unsubscribes?
                channel.QueueDeclare(queue: "messages", durable: false, exclusive: false, autoDelete: false,
                    arguments: null);

                var data = Encoding.UTF8.GetBytes("Hi There");

                //Publish through the default exchange; the routing key is the queue name
                channel.BasicPublish("", "messages", null, data);
            }

            //Consumer
            static void Main(string[] args)
            {
                var factory = new ConnectionFactory() { HostName = "localhost" };
                var connection = factory.CreateConnection();
                var channel = connection.CreateModel();

                // Same declaration as the producer; declaring an existing queue is idempotent.
                channel.QueueDeclare(queue: "messages", durable: false, exclusive: false, autoDelete: false,
                    arguments: null);

                var consumer = new EventingBasicConsumer(channel);

                consumer.Received += (obj, evn) =>
                {
                    var message = Encoding.UTF8.GetString(evn.Body.ToArray());
                    Console.WriteLine($"Received message: {message}");
                };

                channel.BasicConsume(queue: "messages", autoAck: true, consumer: consumer);

                Console.ReadLine();
            }

     
Competing Consumers:
  1. The competing consumers (work queue) pattern spreads the consumption of messages across a number of different consumers, so messages are processed in a scalable and reliable manner.
  2. Adding consumers scales the system out; if a single consumer is too slow to process messages, the queue backs up and the broker can run into memory issues.
  3. The broker dispatches round robin: C1, then C2, and so on. The issue is that C1 might already have a message in flight while C2 sits idle. To overcome this, a consumer is not given a new message until it acknowledges the one it is processing.

            static void Main(string[] args)
            {
                var factory = new ConnectionFactory() { HostName = "localhost" };
                var connection = factory.CreateConnection();
                var channel = connection.CreateModel();

                // queue: The name of the queue. Pass an empty string to make the server generate a name.
                // durable: Should this queue survive a broker restart?
                // exclusive: Should use of this queue be limited to its declaring connection? Such a queue will be deleted when its declaring connection closes.
                // autoDelete: Should this queue be auto-deleted when its last consumer (if any) unsubscribes?
                channel.QueueDeclare(queue: "messages", durable: false, exclusive: false, autoDelete: false,
                    arguments: null);

                //prefetchSize: The server will send a message in advance if it is equal to or smaller
                //in size than the available prefetch size. 0 means "no specific limit".
                //The prefetch-size is ignored if the no-ack option is set.

                //prefetchCount: max unacknowledged messages; the broker will not send a new message
                //to this consumer until it acknowledges one in flight.
                channel.BasicQos(prefetchSize: 0, prefetchCount: 1, global: false);

                var consumer = new EventingBasicConsumer(channel);

                consumer.Received += (obj, evn) =>
                {
                    var message = Encoding.UTF8.GetString(evn.Body.ToArray());
                    Console.WriteLine($"Received message: {message}");

                    channel.BasicAck(deliveryTag: evn.DeliveryTag, multiple: false);
                };

                channel.BasicConsume(queue: "messages", autoAck: false, consumer: consumer);

                Console.ReadLine();
            }

Pub/Sub:
  1. Delivers the same message (duplicating it) to multiple consumers.
  2. In the competing consumers pattern, by contrast, each message goes to only one of the consumers.
  3. Decouples the Producer from the Consumer.
  4. Temporary queues: each Consumer creates a temporary queue, which is destroyed when the consumer disconnects.

    //Producer
            static void Main(string[] args)
            {
                var factory = new ConnectionFactory() { HostName = "localhost" };
                var connection = factory.CreateConnection();
                var channel = connection.CreateModel();

                channel.ExchangeDeclare(exchange: "pubsub", type: ExchangeType.Fanout);

                var data = Encoding.UTF8.GetBytes("Hi There");

                //Publish to the fanout exchange; the routing key is ignored
                channel.BasicPublish("pubsub", "", null, data);
            }
    
            //Consumer
            static void Main(string[] args)
            {
                var factory = new ConnectionFactory() { HostName = "localhost" };
                var connection = factory.CreateConnection();
                var channel = connection.CreateModel();

                channel.ExchangeDeclare(exchange: "pubsub", type: ExchangeType.Fanout);

                // Server-named temporary queue for this consumer; deleted when the connection closes.
                var queueName = channel.QueueDeclare().QueueName;
                var consumer = new EventingBasicConsumer(channel);

                channel.QueueBind(queue: queueName, exchange: "pubsub", routingKey: "");
                consumer.Received += (obj, evn) =>
                {
                    var message = Encoding.UTF8.GetString(evn.Body.ToArray());
                    Console.WriteLine($"Received message: {message}");
                };

                channel.BasicConsume(queue: queueName, autoAck: true, consumer: consumer);

                Console.ReadLine();
            }

     

ACID properties


Scenario: ACID properties for DB systems

Solution:

A transaction is a sequence of operations executed as a single unit of work; it may consist of one or many steps, and it accesses data using read and write operations.

A transaction guarantees the integrity and consistency of the data. If the transaction succeeds, the data modified during it is saved; on error, the changes are not applied.

Atomicity [Transaction Manager]

A transaction must be an atomic unit of work: either all the data is modified or none of it is. The transaction must execute completely or fail completely; if one part of the transaction fails, the whole transaction fails.

Ex: During a money transfer, money going out of account A and into account B must be executed together; if either operation fails, the other is not performed and any earlier steps are rolled back.
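The transfer example can be sketched as a toy in-memory model: snapshot, apply both steps, and restore the snapshot if anything fails. (A real database achieves this with a transaction log and locks, not a snapshot; the account names here are illustrative.)

```csharp
using System;
using System.Collections.Generic;

public class Bank
{
    private readonly Dictionary<string, decimal> accounts = new Dictionary<string, decimal>
    {
        ["A"] = 100m,
        ["B"] = 50m,
    };

    public decimal Balance(string id) => accounts[id];

    public bool Transfer(string from, string to, decimal amount)
    {
        // Snapshot so we can roll back if any step fails.
        var snapshot = new Dictionary<string, decimal>(accounts);
        try
        {
            if (accounts[from] < amount)
                throw new InvalidOperationException("insufficient funds");
            accounts[from] -= amount;   // step 1: debit the sender
            accounts[to] += amount;     // step 2: credit the receiver
            return true;                // "commit": both steps applied
        }
        catch
        {
            // "rollback": restore the snapshot so neither step applies
            accounts.Clear();
            foreach (var kv in snapshot) accounts[kv.Key] = kv.Value;
            return false;
        }
    }
}
```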

Consistency [DB Engineer]

The transaction maintains data-integrity constraints, so the data stays in a consistent state (enforced by constraints, cascades, and triggers). The database must be consistent both before and after the transaction. If a transaction would put data into an invalid state, the transaction is aborted and an error is reported.

Ex: If we try to add a record to a User table with the ID of a department that does not exist in the Department table, the transaction will fail. Or, as in the Atomicity example, if user A's money is deducted but not credited to user B, the amounts would be inconsistent.

Isolation [Application programmer]

A transaction should not be affected by any other concurrent transaction; each in-progress transaction must not be interfered with by any other until it completes.
Before altering a row, a transaction takes a lock (shared or exclusive) on that row, disallowing any other transaction from acting on it. Subsequent transactions must wait until the first one either commits or rolls back.

Ex: If only one unit of stock is available for a product in an online store and two shoppers buy it at the same time, only the first user's transaction finishes; the other user's transaction is interrupted and does not go through.


Durability [Recovery Manager]

Once a transaction is completed and committed, its changes are persisted permanently in the database. The saved data is immutable until another update or delete transaction affects it.

So once a transaction is committed, it remains committed even if an issue occurs later, such as a machine crash or system restart. Completed transactions are recorded on permanent storage devices like hard drives, so the data remains available even when the DB instance is restarted.

Database options for system design


Scenario: Types of Databases available based on requirements

Solution:

Two types of Database Indexes:
  1. LSM trees + SS Tables
    1. A balanced binary tree in memory (the memtable). When it gets big, it's flushed to disk as an SS Table (a sorted list of keys).
    2. If there are many SS Tables, they are merged (compaction).
    3. Fast writes, since they go to memory.
    4. For reads, one may need to search many SS Tables.
  2. B trees
    1. Balanced trees of pages linked by pointers on disk.
    2. Each page on disk covers a range of keys. A write walks down the tree and either updates an existing key's value or creates a new page on disk and updates the pointer to the new page.
    3. Faster reads, as the tree shows where each key is located.
    4. Slower writes, since they go to disk.
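The LSM flow above can be sketched in a few lines: writes go to an in-memory sorted structure; past a threshold it is frozen into an immutable "SSTable"; reads check the memtable first, then the SSTables from newest to oldest. (Real engines persist SSTables to disk and compact them; this toy keeps everything in memory.)

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class LsmStore
{
    private readonly int flushThreshold;
    private SortedDictionary<string, string> memtable = new SortedDictionary<string, string>();
    private readonly List<SortedDictionary<string, string>> ssTables = new List<SortedDictionary<string, string>>();

    public LsmStore(int flushThreshold = 2) => this.flushThreshold = flushThreshold;

    public void Put(string key, string value)
    {
        memtable[key] = value;          // fast write to memory
        if (memtable.Count >= flushThreshold)
        {
            ssTables.Add(memtable);     // flush: the memtable becomes an immutable SSTable
            memtable = new SortedDictionary<string, string>();
        }
    }

    public string Get(string key)
    {
        if (memtable.TryGetValue(key, out var v)) return v;
        // Search the newest SSTable first so later writes win.
        foreach (var table in Enumerable.Reverse(ssTables))
            if (table.TryGetValue(key, out v)) return v;
        return null;                    // key not found in any table
    }
}
```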
Replication
  1. Single leader
    1. All writes go to one master, which replicates to the others. Read from any.
    2. There are no write conflicts.
    3. The single leader's write bottleneck can be mitigated with shards and partitions.
  2. Multi leader
    1. Writes go to a small subset of leader DBs. Read from any.
    2. Increased write throughput, but write conflicts can occur.
  3. Leaderless
    1. Write to all, read from any.
    2. Increased write throughput, but write conflicts can occur.
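For leaderless systems, a common consistency rule (used by Dynamo-style stores) is to require W write acknowledgements and read from R replicas out of N total: if W + R > N, every read set overlaps every write set, so a read sees at least one up-to-date copy. As a one-line check:

```csharp
public static class Quorum
{
    // With N replicas, W write acks, and R read replicas, the read and
    // write sets are guaranteed to overlap exactly when W + R > N.
    public static bool GuaranteesOverlap(int n, int w, int r) => w + r > n;
}
```

For example, with N = 3 the typical choice is W = 2, R = 2.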
SQL Databases
  1. Relational/Normalized data.
  2. It requires a 2-phase commit:
    1. Prepare: check that each node can promise to carry out the update.
    2. Commit: actually write the data.
    3. If any node is unable to make that promise, the coordinator tells all nodes to roll back, releasing any locks they hold, and the transaction is aborted.
  3. ACID guarantees.
  4. Slow due to #2; transactions are slow.
  5. Use B trees.
  6. Relational (RDBMS) databases are vertically scalable.
  7. Expensive.
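The two-phase commit steps above, as a minimal sketch (the interface and names are illustrative, not a real driver API):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public interface ITxNode
{
    bool Prepare();   // phase 1: promise you can carry out the update (take locks)
    void Commit();    // phase 2a: actually write the data
    void Rollback();  // phase 2b: release locks and discard the change
}

public static class TwoPhaseCommit
{
    public static bool Run(IReadOnlyList<ITxNode> nodes)
    {
        // Phase 1: every node must promise before anything is written.
        if (nodes.All(n => n.Prepare()))
        {
            foreach (var n in nodes) n.Commit();   // phase 2: all commit
            return true;
        }
        foreach (var n in nodes) n.Rollback();     // any refusal aborts everyone
        return false;
    }
}
```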
Mongo DB
  1. It's a document DB. Documents can be nested.
  2. NoSQL.
  3. Uses B trees and supports transactions.
  4. With NoSQL, unstructured and schemaless data can be stored in multiple collections and nodes. It does not require fixed table schemas, allows only limited join queries, and can be scaled horizontally.
  5. Relatively cheap.
  6. No stored procedures.
Apache Cassandra
  1. NoSQL
  2. Wide-column data store (like an Excel spreadsheet)
  3. Shard key and sort key
  4. Multi-leader/leaderless
    1. Fast writes. Quorum reads/writes. Last write wins.
  5. Based on LSM trees & SS Tables.
  6. Good for high write volume where consistency is not vital and all writes and reads for a key go to the same shard - e.g. a chat application.
  7. No transactions.
Redis and Memcached
  1. Key - value pair in memory
  2. Cache and geo spatial index. etc.

Cloud Best Practices


Scenario: Cloud Best Practices

Solution:

1. Enable Scalability - design stateless apps.
2. Data Store Solution - choose RDBMS engines or NoSQL data stores, including search-optimized ones, as the workload requires.
3. Disposable Resources - dockerize into containers.
4. Automate Environment - automation for scalability, consistency, and availability.
5. Services vs Servers - managed services and serverless applications are more reliable and cost-effective.
6. Single Point of Failure - must handle Zone & DC failures.
7. Optimize for cost.
8. Security - data at rest as well as in transit.
9. Make data actionable.

Move Github Sub Repository back to main repo

-- delete .gitmodules
git rm --cached MyProject/Core
git commit -m 'Remove myproject_core submodule'
rm -rf MyProject/Core
git remo...