
Futures in Neuromorphic Computing

Which chip will emerge the victor in the new race to beat Moore's law and finally give us the intelligent machines we've been told are in our future? Whether one approach gets out-competed or they form a marriage of convenience, it's still far too early to tell. Briefly, some background, as I may be running off with the premise of this piece even before the starting gun.


The transistor technology that has sustained the electronics and computer industry for the past twenty-plus years has grown by leaps and bounds (thank you, Moore's Law), enabling massive computational devices to proliferate at a fraction of the cost they would have commanded the year before. And even from the earliest times, when a PC took up an entire room and drew as much power as a small town, the dream of AI has been slowly gaining traction. However, it was realised early on that the positronic brains we so desire for our robots would not be realised by the hardware at hand. Fast forward to the present, where the problem still persists: no matter how many processor cores one throws at it, the current crop of supercomputers built to simulate an artificial intelligence still holds to that same principle of a large roomful of boxes drawing enough power for a small town (the more things change). However, a fundamental difference from the early efforts in AI research is that, with advances in neuroscience, we now know better how the functioning of the brain might be simulated by artificial means.

The hardware side of AI research has exposed a fundamental flaw in the model: the von Neumann architecture.
<”von Neumann architecture is a design model for a stored-program digital computer that uses a central processing unit (CPU) and a single separate storage structure (“memory”) to hold both instructions and data. The separation between the CPU and memory leads to the von Neumann bottleneck, the limited throughput (data transfer rate) between the CPU and memory compared to the amount of memory. In most modern computers, throughput is much smaller than the rate at which the CPU can work. This seriously limits the effective processing speed when the CPU is required to perform minimal processing on large amounts of data. The CPU is continuously forced to wait for needed data to be transferred to or from memory.”-Wiki>

This is functionally different from the way a brain organises its information, let alone processes it.

<”A biological brain is able to quickly execute this massive simultaneous information orgy—and do it in a small package—because it has evolved a number of stupendous shortcuts. Here’s what happens in a brain: Neuron 1 spits out an impulse, and the resultant information is sent down the axon to the synapse of its target, Neuron 2. The synapse of Neuron 2, having stored its own state locally, evaluates the importance of the information coming from Neuron 1 by integrating it with its own previous state and the strength of its connection to Neuron 1. Then, these two pieces of information—the information from Neuron 1 and the state of Neuron 2’s synapse—flow toward the body of Neuron 2 over the dendrites. And here is the important part: By the time that information reaches the body of Neuron 2, there is only a single value—all processing has already taken place during the information transfer. There is never any need for the brain to take information out of one neuron, spend time processing it, and then return it to a different set of neurons. Instead, in the mammalian brain, storage and processing happen at the same time and in the same place.” – Spectrum IEEE>

This brings us to the first of the next-generation processing elements, based on memristor technology.


From the ground up, a memristor, whose existence was theorised in the '70s and actualised by HP Labs in 2008, behaves in application like an FPGA: it realises functions that would need several transistors in a CMOS circuit, with the added advantages of non-volatile memory (no power required for state refreshing) and a structure that is remarkably defect-tolerant.
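To get a feel for the behaviour, here is a minimal sketch of the linear ion-drift model HP's team published to describe the device; the parameter values are illustrative, not data from the hybrid chips discussed here.

```python
# Minimal sketch of the HP-style linear ion-drift memristor model.
# Parameter values are illustrative, not real device data.

R_ON, R_OFF = 100.0, 16e3      # ohms: fully doped / undoped resistance
D = 10e-9                      # m: device thickness
MU = 1e-14                     # m^2/(V*s): dopant mobility
DT = 1e-3                      # s: (coarse) integration step

def step(w, v):
    """Advance normalised state w (0..1) one step under voltage v."""
    m = R_ON * w + R_OFF * (1.0 - w)       # memristance M(w)
    i = v / m                              # current through the device
    w += MU * R_ON / D**2 * i * DT         # linear drift of the doped region
    return min(max(w, 0.0), 1.0), m

w = 0.1
for _ in range(2000):                      # apply a positive bias...
    w, m_biased = step(w, 1.0)
for _ in range(2000):                      # ...then remove power entirely
    w, m_idle = step(w, 0.0)

# Non-volatility: with v = 0 no current flows, so the state (and hence
# the stored resistance) is retained without any refresh.
print(round(m_biased), round(m_idle))      # -> 100 100
```

The point of the second loop is the non-volatility mentioned above: once the drive voltage is removed, the resistance state simply stays put.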

The memristor layer interacts with the CMOS logic layer of the hybrid chip and, depending on the circuit configuration, can realise any number of logic-gate structures. The process of creating the hybrid chip leaves the underlying CMOS layer untouched, while the redundant data paths of the crossbar architecture allow routing around defective areas. In neuromorphic computing applications, with memristors as synapses and transistors as neurons, unsupervised learning becomes an actual possibility. A current work in progress at Boston University, MoNETA, aims to realise a general-purpose AI able to adapt to solving a problem without prior training, rather than relying on what essentially boils down to brute-force techniques with little room for creative problem solving. It uses hundreds of ordinary processing cores sandwiched with a memristor layer, in which memory is localised into an immediately accessible super-cache that needs very little power to maintain its information.
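The neuromorphic appeal of such a crossbar is that reads are computation: driving the rows with voltages and summing the currents down each column performs a vector-matrix multiply in a single analog step (Ohm's law plus current summation). An idealised sketch, with made-up conductances and nothing taken from MoNETA's actual design:

```python
# Sketch of an ideal memristor crossbar used as an analog
# vector-matrix multiplier: row voltages in, column currents out.
# The conductances G play the role of synaptic weights (values made up).

def crossbar_read(G, v_rows):
    """I_j = sum_i G[i][j] * V_i  (Ohm's law + column current summing)."""
    n_cols = len(G[0])
    return [sum(G[i][j] * v_rows[i] for i in range(len(G)))
            for j in range(n_cols)]

G = [[0.5, 0.1],   # siemens (illustrative): input i -> neuron j
     [0.2, 0.4]]
v = [1.0, 0.0]     # activity on input 0 only

print(crossbar_read(G, v))   # column currents: [0.5, 0.1]
```

Every synapse contributes to the output simultaneously, which is exactly the storage-and-processing-in-one-place property the IEEE Spectrum quote above describes.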

The software for modelling the neurological topology is in this case being handled by Cog Ex Machina, a special-purpose OS.


The next contender stepping up to the plate of neuromorphic chips is the chaogate. I must confess I'm particularly attached to this one, and not just because of the butterflies. Partial differential equations and the way their solutions arise produce some of the most beautiful patterns, and I like to think brains work similarly, if only we could see it. As far as chip construction is concerned, a new type of gate has recently been developed that is able to reconfigure itself to provide different logic gates, hence 'chaogates'. Unlike FPGAs, where reconfiguration is achieved by switching between RCLGs, chaogates morph via the patterns inherent in their constitutive nonlinear element. Modern computers depend on Boolean logic, in which any logical operation can be realised from NOR and NAND gates. The chaotic processor is taken as a 1D system whose state is represented by x and whose dynamics are given by a nonlinear map f(x); if f(x) simultaneously satisfies the necessary and sufficient conditions, it is able to implement the full set of logical operations.
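To make this concrete, here is a software sketch of the thresholding scheme Sinha and Ditto described, using the logistic map f(x) = 4x(1-x) as the 1D chaotic element: the two logic inputs are encoded as shifts of size δ to the initial state x0, the map is iterated once, and the output is 1 when the result exceeds a threshold x*. The x0 values below are the standard ones quoted for δ = 1/4, x* = 3/4; the same element becomes a different gate purely by shifting its initial state.

```python
# Chaotic logic gates (Sinha-Ditto thresholding scheme) on the
# logistic map. One nonlinear element implements different gates
# purely by shifting its initial state x0.

def f(x):
    return 4.0 * x * (1.0 - x)     # the chaotic map

DELTA, X_STAR = 0.25, 0.75         # input encoding step and threshold

def chaogate(x0, a, b):
    """Encode inputs a, b as state shifts, iterate once, threshold."""
    return 1 if f(x0 + a * DELTA + b * DELTA) > X_STAR else 0

GATES = {"AND": 0.0, "OR": 0.125, "XOR": 0.25, "NAND": 0.375}

for name, x0 in GATES.items():
    table = [chaogate(x0, a, b) for a, b in ((0,0),(0,1),(1,0),(1,1))]
    print(name, table)
```

Running it prints the correct truth table for each gate, e.g. `NAND [1, 1, 1, 0]`, which is the "morphing" behaviour in miniature: same element, same map, different x0.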

It also becomes possible to implement combinational logic directly. Case in point: the half adder, which involves an AND gate (for the carry) and an XOR (for the sum bit), is implementable with one 1D chaotic element, and a full adder requires three iterations of the single chaotic element, giving us efficient computational modules without cascading.
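A software sketch of that half adder, again using the logistic map f(x) = 4x(1-x) with the Sinha-Ditto threshold scheme (in hardware a single element would be re-used; here the two configurations are simply evaluated explicitly):

```python
# Half adder from one chaotic element (logistic map): the XOR
# configuration of the element gives the sum bit, the AND
# configuration gives the carry. Parameters: delta = 1/4, x* = 3/4.

def f(x):
    return 4.0 * x * (1.0 - x)

DELTA, X_STAR = 0.25, 0.75

def gate(x0, a, b):
    return 1 if f(x0 + (a + b) * DELTA) > X_STAR else 0

def half_adder(a, b):
    s = gate(0.25, a, b)   # x0 = 0.25 configures the element as XOR (sum)
    c = gate(0.0,  a, b)   # x0 = 0.0  configures the element as AND (carry)
    return s, c

for a in (0, 1):
    for b in (0, 1):
        print(a, b, half_adder(a, b))
```

The same element evaluated under two initial conditions yields both output bits, which is the efficiency claim above in its simplest form.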

Development by ChaoLogix using standard CMOS techniques has led to an array with a morphing ALU that delivers higher functions (multiplier and adder) in less than one clock cycle, and communications hardware that morphs between two different protocols (synchronous serial data link or serial computer bus) in less than one clock cycle. Arrays could conceivably be programmed on the run, with threshold values sent from an external program to optimise for the task at hand.

Current efforts are aimed at optimising the chaogate design down to sizes similar to or smaller than NAND gates. As a caveat, the developers add that programming the chaogates will require the development of a new hardware description language; in its absence, ideas from evolutionary algorithms are being considered as viable alternatives for achieving optimal array configurations.
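A toy version of that evolutionary idea, and nothing to do with ChaoLogix's actual tooling: let a small evolutionary search discover the initial condition x0 that makes a logistic-map chaogate (δ = 1/4, x* = 3/4, as before) reproduce a target truth table.

```python
import random

# Toy evolutionary search: evolve the initial condition x0 of a
# logistic-map chaogate until its truth table matches a target gate.
# Purely illustrative; real arrays would tune analog thresholds.

def f(x):
    return 4.0 * x * (1.0 - x)

DELTA, X_STAR = 0.25, 0.75
INPUTS = [(0, 0), (0, 1), (1, 0), (1, 1)]

def table(x0):
    return [1 if f(x0 + (a + b) * DELTA) > X_STAR else 0
            for a, b in INPUTS]

def fitness(x0, target):
    return sum(o == t for o, t in zip(table(x0), target))

def evolve(target, pop_size=30, gens=200, sigma=0.05, seed=1):
    rng = random.Random(seed)
    pop = [rng.uniform(0.0, 0.5) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda x: -fitness(x, target))
        if fitness(pop[0], target) == len(INPUTS):
            return pop[0]
        # keep the best half, mutate it to refill the population
        half = pop[: pop_size // 2]
        pop = half + [min(max(x + rng.gauss(0, sigma), 0.0), 0.5)
                      for x in half]
    return pop[0]

x0 = evolve([1, 1, 1, 0])            # target: NAND
print(round(x0, 3), table(x0))
```

No hardware description language required: the search only ever observes the gate's truth table, which is precisely why such black-box techniques are attractive here.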


While the focus here has been on the hardware advances of recent months, on the software side of things Numenta deserves a nod for its work on recreating a workable model of the human neocortex using its HTM approach. On the open-source side, dust seems to be gathering, with the last activity on projects like OpenAI being about four years ago.

With recent advancements tackling the whole problem of AI from a new perspective, it's high time a proper open stack was available to enable the faintest vestiges of consciousness to be breathed into our computers. So say we all.


Image credits: "Positronic Brain" by Fernando Laub
"Optical Micrograph of CMOS chip with memristor" [Nano Lett., 2009, 9 (10), pp 3640–3645, DOI: 10.1021/nl901874j]
"Chaogate Element" – American Institute of Physics [doi:10.1063/1.3489889]

Posted by on December 26, 2010 in code, Hardware, Processors


SBC (Single Board Computer) using 8086

The aim is to design and model an 8086-based computer and add several interfaced peripherals to it. The system has been designed to meet the following requirements:

Total 32K×16 SRAM

Total 64K×16 EPROM

Parallel I/O ports

Analog/digital I/O ports

Below is a block diagram of the system showing the functional units' relationships to each other. The descriptions that follow are based on this model and will thus be referred to by the module each currently appears in. Also below is a preliminary sketch of the completed system. CAVEAT: This is how the completed system will look; however, pin-outs may change depending on the availability of ICs.
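As a sanity check on the memory requirements above (the addresses are my own placeholders, not the final decoding): the 8086 fetches its first instruction from FFFF0h and keeps its interrupt vector table at 00000h, which suggests placing the 64K×16 EPROM (128 KB) at the top of the 1 MB map and the 32K×16 SRAM (64 KB) at the bottom. A quick sketch of such a decoder:

```python
# Hypothetical memory decoder for the SBC (illustrative addresses only;
# actual decoding depends on the final glue-logic design).
# 8086 facts assumed: 1 MB space, reset vector at FFFF0h, IVT at 00000h.

SRAM_BASE, SRAM_SIZE = 0x00000, 64 * 1024     # 32K x 16 = 64 KB
EPROM_SIZE           = 128 * 1024             # 64K x 16 = 128 KB
EPROM_BASE           = 0x100000 - EPROM_SIZE  # top of the 1 MB map

def select(addr):
    """Return which device a 20-bit physical address falls in."""
    if SRAM_BASE <= addr < SRAM_BASE + SRAM_SIZE:
        return "SRAM"
    if EPROM_BASE <= addr < 0x100000:
        return "EPROM"
    return "open"

print(select(0x00000))   # interrupt vector table -> SRAM
print(select(0xFFFF0))   # reset vector -> EPROM
```

This placement guarantees the CPU lands in ROM at reset and that the vector table lives in writable RAM; the real address-decoder module will be covered with its ICs in a later post.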

I will be going through the different modules step-by-step in the upcoming posts.

ARES schematic

ARES schematic of SBC


Posted by on December 17, 2010 in Hardware, Processors, Uncategorized