Chapter 3 — Sequential Logic

nathan brickett
Jan 3, 2021

We continue our journey with Chapter 3: Sequential Logic of Building a Modern Computer From First Principles. Last time we took a look at Boolean Arithmetic and saw how we could start to build the ALU using arithmetic chips. These chips are called combinational chips because they compute functions that depend solely on the combinations of their input values. These relatively simple chips provide many of the important processing functions, but the one thing they cannot do is maintain state. Obviously it does us no good to compute a value and then lose it. We need to be able to compute a value and then store and recall it later. We create memory elements that are built from chips called sequential chips. The actual implementation of these memory elements is an intricate process involving synchronization, clocking, and feedback loops. However, most of this complexity can be hidden inside the operating logic of very low-level sequential chips called flip-flops.

We know from our own lives that the act of remembering something depends on time. We create memories in the present and then access them some time later. At that later time, we are remembering now what we committed to memory before. Computers act in the same way: in order to remember any information, they must have a standard means of representing the progression of time.

Most computers have a master clock that handles the passage of time by delivering a continuous train of alternating signals. The hardware itself is based on an oscillator that alternates continuously between two phases labeled 0–1, tick-tock, etc. The elapsed time between these two alternating signals (from the beginning of a ‘tick’ to the end of the following ‘tock’) is called a cycle, and we use a cycle to model one discrete time unit. This tick-tock clock phase is represented by a binary signal which is then broadcast to every sequential chip throughout the computer.

There are several variants of the ‘flip-flop’; the focus of this chapter is the ‘data flip-flop’ (DFF), whose interface consists of a single-bit data input and a single-bit output. In addition, the DFF has a clock input that continuously changes according to the master clock’s signal. This allows the DFF to output the input value from the previous time unit: out(t) = in(t-1), where in and out are the gate’s input and output values and t is the current clock cycle. This elementary behavior is the basis for all the hardware devices that computers use to maintain state.
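In the HDL used throughout the book, the DFF is a built-in primitive, so there is nothing to implement; you simply wire it into other chips. As a minimal sketch of what out(t) = in(t-1) buys us, here is a hypothetical chip (Delay2 is my name, not one of the book’s chips) that chains two DFFs to delay a signal by two clock cycles:

```
// Delay2.hdl (hypothetical example, not a book chip)
// Chains two built-in DFFs, so out(t) = in(t-2).
CHIP Delay2 {
    IN in;
    OUT out;

    PARTS:
    DFF(in=in, out=mid);    // mid(t) = in(t-1)
    DFF(in=mid, out=out);   // out(t) = mid(t-1) = in(t-2)
}
```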

The storage device that stores our inputs, or ‘remembers’ a value over time, is called a register; left alone, it implements the classical storage behavior out(t) = out(t-1), i.e. it keeps emitting whatever value it currently holds. Letting the register choose between keeping its stored value and accepting a new one is accomplished with a multiplexor (a device that selects one of several input signals and passes it through to its single output). The multiplexor’s select line acts as a ‘load bit’: when we want the register to start storing a new value, we present this value at the ‘in’ input and set the ‘load bit’ to 1; when we want the register to keep its current value, we set the ‘load bit’ to 0. This is the basic mechanism for remembering a single bit over time, and it allows us to construct registers of arbitrary size: by forming an array of as many single-bit registers as needed, we create a register that can hold multi-bit values. A basic design parameter of a register is its width, the number of bits it contains (e.g. 16, 32, or 64), and the multi-bit values held in registers are referred to as words.
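Here is how this mechanism looks in HDL, roughly following the book’s project 3; this is one way to wire the single-bit register (called Bit in the book) from a Mux and a DFF, not the only possible one:

```
// Bit.hdl -- single-bit register: one Mux in front of one DFF.
// If load == 1, the DFF is fed the new input; otherwise it is fed
// its own previous output, so the stored bit is preserved.
CHIP Bit {
    IN in, load;
    OUT out;

    PARTS:
    Mux(a=feedback, b=in, sel=load, out=next);
    DFF(in=next, out=out, out=feedback);
}
```

The 16-bit Register chip is then just sixteen of these Bit parts sitting side by side, all sharing the same load bit, with in[i] feeding Bit i and Bit i driving out[i].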

The Random Access Memory (RAM) unit is built by stacking together many registers. The term random access refers to the read/write operations: we require that the RAM can access any randomly chosen word in memory with equal speed, regardless of its address. How does it accomplish this? We assign each word in an n-register RAM a unique address (an integer between 0 and n-1), which becomes the key used to access that word. We then implement a gate logic design that, given an address, selects the register that has that address. To summarize, a RAM device accepts three inputs: a data input, an address input, and a load bit. The address tells the computer which RAM register should be accessed. If the load bit is set to 0, it is a read operation, and the RAM’s output is the value of the selected register. If the load bit is set to 1, it is a write operation, and the selected register commits the new input value in the next tick-tock cycle, from which point on the RAM’s output will be this new value.
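To make this concrete, here is a sketch of the smallest RAM chip in the book (RAM8, holding eight 16-bit words), wired the straightforward way from chips built in the earlier chapters: a DMux8Way steers the load bit to exactly one register, and a Mux8Way16 selects which register’s value reaches the output.

```
// RAM8.hdl -- eight 16-bit registers behind a 3-bit address.
CHIP RAM8 {
    IN in[16], load, address[3];
    OUT out[16];

    PARTS:
    // Steer the load bit to exactly one register, chosen by the address.
    DMux8Way(in=load, sel=address,
             a=l0, b=l1, c=l2, d=l3, e=l4, f=l5, g=l6, h=l7);

    // All eight registers see the same data input; only the selected
    // one (if any) actually loads it.
    Register(in=in, load=l0, out=r0);
    Register(in=in, load=l1, out=r1);
    Register(in=in, load=l2, out=r2);
    Register(in=in, load=l3, out=r3);
    Register(in=in, load=l4, out=r4);
    Register(in=in, load=l5, out=r5);
    Register(in=in, load=l6, out=r6);
    Register(in=in, load=l7, out=r7);

    // Emit the value of the addressed register.
    Mux8Way16(a=r0, b=r1, c=r2, d=r3, e=r4, f=r5, g=r6, h=r7,
              sel=address, out=out);
}
```

Larger RAM chips (RAM64, RAM512, and so on) follow the same pattern recursively: part of the address picks a bank of smaller RAM chips, and the rest of the address picks a word inside that bank.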

Another important piece is the counter. This is a sequential chip whose state is an integer number that is incremented every time unit (every tick-tock cycle); its operating logic is essentially a register combined with an adder that adds a fixed constant (1) to the current state. Typical CPUs have a program counter whose output is interpreted as the address of the next instruction to be executed.
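Here is a deliberately simplified sketch of such a chip. The name Counter and the reset-only interface are mine; the book’s actual PC chip in project 3 adds load and inc control bits so the CPU can also jump to an arbitrary address or hold its current value.

```
// Counter.hdl (hypothetical, simplified) -- a free-running counter
// whose state grows by 1 every clock cycle, or snaps back to 0 on reset.
CHIP Counter {
    IN reset;
    OUT out[16];

    PARTS:
    Inc16(in=state, out=plusOne);                    // state + 1
    Mux16(a=plusOne, b=false, sel=reset, out=next);  // reset wins over counting
    Register(in=next, load=true, out=out, out=state);
}
```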

How do we synchronize the timing of all of these chips? All sequential chips embed one or more DFF gates, which allows them either to maintain state (memory units) or to operate on state (counters). This is accomplished by the DFF’s dependence on the master clock: the feedback loops inside a sequential chip pass through clocked DFFs, which ensures that the chip’s output values change only at the point of transition from one clock cycle (tick-tock) to the next, and not within the cycle itself. Sequential chips may be unstable during the cycle; we only require that they output correct values by the beginning of the next cycle. This is different from a combinational chip, which has no concept of time: a combinational chip’s outputs change whenever its inputs change, irrespective of time. If we want the ALU to compute, say, x + y, and these two values are located in two separate physical locations in the RAM, then due to various physical constraints (distance, resistance, interference, etc.) the signals will likely arrive at the ALU at different times.

This means that unless we correct for it, the ALU cannot stabilize on the correct output until it has received both signals; until then, its output is garbage. To synchronize this process, we build the computer’s master clock so that the length of the clock cycle is slightly longer than the time it takes a bit to travel between the two most distant points in the system. This guarantees that by the time a sequential chip updates its state at the start of the next clock cycle, the inputs it has received are valid.
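Stated loosely as an inequality (a rough rule of thumb rather than a precise timing model, and not notation from the book):

$$ t_{\text{cycle}} \;>\; \max_{p \,\in\, \text{signal paths}} t_{\text{delay}}(p) $$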
