[Hans Nielsen] has a couple of roommates, and his garage has become a catch-all for various items. And like any good hacker’s garage, it boasts an IoT-controlled garage door opener. It had a problem, though: it was built around a Particle Photon, a popular IoT board whose workflow required internet access and a web server to operate. So [Hans] raided his roommate’s spare parts bin and set forth to rebuild it!
One of his main goals was to make something that did not require internet access to operate: anyone connected to the local WiFi should be able to open and close the door via a web interface. To make it happen, he gave our good friend [Linus Torvalds] a call. The key component in the build is the C.H.I.P. SBC, which made the news a while back for being ridiculously cheap.
Be sure to check out [Hans]’s blog if you’re at all interested in working with the C.H.I.P. He does a fantastic job of documenting the ins and outs of getting a project like this working.
Another instant classic video from Big Clive, who does a great job of examining some fire sprinkler heads he picked up off eBay for a couple of bucks each. For whatever reason, I’ve only ever looked at, examined, or noticed red-liquid sprinkler heads, so I was completely unaware of the color coding system, which, as Big Clive points out, is more for easy identification than operation – the temperature at which the head triggers is actually set by the bubble in the bulb!
This is a pocket-sized control box enclosure for a Stranger Things-style LED string controller.
This is a buildable project – the electronics are standard parts and modules available on eBay. The wiring hookup diagram and Arduino software are included in this post. An Android or iPhone smartphone with Bluetooth is needed to use the control box: a smartphone app sends text strings to the control box, which then blinks the LEDs appropriately. The instructions for the app and the control box commands are at the top of the Arduino software listing.
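The post leaves the string-to-blink mapping to the Arduino listing, but the classic Stranger Things effect assigns one LED per letter of the alphabet and flashes them in order to spell out a message. Here is a minimal Python sketch of that message-to-blink logic (the real project does this in Arduino code; the 26-LED, a–z layout here is an assumption for illustration):

```python
# Sketch of the Stranger Things message-to-blink logic.
# Assumption: 26 LEDs on the string, one per letter, a-z mapped to index 0-25.

def message_to_led_sequence(message: str) -> list[int]:
    """Convert a text string into the ordered list of LED indices to flash."""
    sequence = []
    for ch in message.lower():
        if 'a' <= ch <= 'z':
            sequence.append(ord(ch) - ord('a'))
        # Non-letters (spaces, punctuation) are skipped, since the
        # alphabet wall has no LED for them.
    return sequence

# Example: "RUN" flashes the LEDs for r, u, n in order.
print(message_to_led_sequence("RUN"))  # -> [17, 20, 13]
```

The control box would walk this list, lighting each LED for a moment with a pause between letters.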
Every Thursday is #3dthursday here at Adafruit! The DIY 3D printing community has passion and dedication for making solid objects from digital models. Recently, we have noticed electronics projects integrated with 3D printed enclosures, brackets, and sculptures, so each Thursday we celebrate and highlight these bold pioneers!
Have you considered building a 3D project around an Arduino or other microcontroller? How about printing a bracket to mount your Raspberry Pi to the back of your HD monitor? And don’t forget the countless LED projects that are possible when you are modeling your projects in 3D!
The Adafruit Learning System has dozens of great tools to get you well on your way to creating incredible works of engineering, interactive art, and design with your 3D printer! If you’ve made a cool project that combines 3D printing and electronics, be sure to let us know, and we’ll feature it here!
During the academic year of 2016-2017 at McMaster University, in conjunction with Dr. DeBruin, Christina Riczu, Thomas Phan and Emilie Corcoran, we developed a compact, battery powered, 12-lead electrocardiogram. The project won 1st place in the biomedical category at the ECE Capstone Poster Day.
As the saying goes, hindsight is 20/20. It may surprise you that the microchip that we all know and love today was far from an obvious idea. Some of the paths being explored back then to cram more components into a smaller area seem odd now. But who hasn’t experienced hindsight of that sort, even on their own bench top?
Let’s start the story of the microchip like any good engineering challenge should start: by diving into the problem that existed at the time, the skyrocketing complexity of computing machines.
The Problem: Tyranny Of Numbers
The ENIAC computer contained about 20,000 vacuum tubes and around 90,000 other components, all wired together using 5,000,000 hand-soldered joints. By 1956, one tube would burn out every two days, and it would take 15 minutes to find it. All that meant the longest continuous run time was just short of five days, a far cry from today’s computers, which can stay on for years at a time.
The germanium transistor, the successor to the troublesome vacuum tube, was invented in the late 1940s, followed in 1955 by the silicon transistor. That same year the first all-transistor computer, the Harwell CADET, went into operation, though it used a modest 324 point-contact transistors. Nonetheless, the switch from vacuum tubes to transistors had begun, and the new transistors’ lower power requirements and lower heat meant that computers could be made more capable and complex.
To minimize complexity, the hardware was broken up into modules. Several modules might work together to function as an adder, for example. However, each module was a circuit board that had to be hand-soldered, making it prone to failure. Plus, these modules had to be wired together with masses of cables and connectors, yet another source of failure.
These problems caused by the quantity and complexity of components, as well as the resulting size and weight of the computer, were known as the tyranny of numbers, and were seen as an impediment to advancing to more complex circuits.
However, where there’s a problem, there’s usually a solution. As you’ll see, some went nowhere and some succeeded beyond the engineer’s wildest imagination.
One solution to the high failure rate was to add redundancy to circuits. For example, a radio would have an extra circuit built in. But this just made the overall circuit larger when size was already an issue.
The US Army favored a solution involving Micro-Modules, wherein each electronic component would exist on a small ceramic square which could then be connected to other such squares much like snapping together blocks of LEGO. The US Navy had a similar solution in Project Tinkertoy.
Some solutions actually involved making integrated circuits, but went nowhere. In Germany, in 1949, Werner Jacobi of Siemens AG filed a patent for something very much like an IC, consisting of five transistors on a common substrate as a three-stage amplifier for a small and cheap hearing aid, but it didn’t result in any commercial use. Similarly, Geoffrey Dummer of the Royal Radar Establishment in Britain came up with the idea for an integrated circuit in 1952, but the UK military couldn’t envisage a use for it and UK industry was unwilling to invest in it.
Jack Kilby’s Solution While At Texas Instruments
The integrated circuit that did lead to something was Jack Kilby’s at Texas Instruments (TI). Jack had recently joined TI and was unhappily working on the US Army’s Micro-Modules solution — unhappily because he was an engineer who enjoyed solutions that solved the right problem. He saw the tyranny of numbers problem as a problem of having too many components. Micro-Modules didn’t reduce the number of components.
TI had invested a lot of money in working with semiconductors for their transistors, learning how to purify them and how to dope them with impurities. So Kilby wondered if a solution could be found using semiconductors. He figured you could make resistors and capacitors from them. And once he’d concluded that, he hit on his idea: make the entire circuit out of one material, in one monolithic block, condensing the whole circuit into a single component.
And so on July 24, 1958, he did what any good engineer would: he wrote his monolithic idea in his notebook. When he showed his notes to Willis Adcock, who’d recruited him to TI, he got permission to try making a resistor and a capacitor, which he did. That earned him permission to make a full circuit. It was decided that he’d make a phase-shift oscillator, a circuit whose resistors, capacitors, and transistor work together to output a sine wave.
By September 12, 1958 he was ready to demo it. The components were made from a single germanium substrate 7/16 of an inch long and 1/16 of an inch wide, glued to a glass slide to keep it flat. Wires connected the components together. A group of executives gathered around him and his tiny device in a lab. He adjusted an oscilloscope, pushed a switch, and a perfect green sine wave snaked continuously across the scope’s screen. A good solution to the tyranny of numbers had been found.
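For the curious, a textbook three-stage RC phase-shift oscillator of the kind Kilby demonstrated oscillates at the frequency where the RC ladder contributes 180° of phase shift, f = 1/(2πRC√6). A quick Python check with illustrative component values (assumed for the example, not Kilby’s actual parts):

```python
import math

# Oscillation frequency of a three-stage RC phase-shift oscillator:
#   f = 1 / (2 * pi * R * C * sqrt(6))
# R and C here are illustrative assumptions, not historical values.

def phase_shift_freq(r_ohms: float, c_farads: float) -> float:
    """Frequency (Hz) at which the RC ladder gives 180 degrees of phase shift."""
    return 1.0 / (2 * math.pi * r_ohms * c_farads * math.sqrt(6))

# With R = 10 kOhm and C = 10 nF, the oscillator runs at roughly 650 Hz.
print(round(phase_shift_freq(10e3, 10e-9)))  # ~650 Hz
```

The √6 factor comes from requiring each of the three identical RC stages to contribute its share of the 180° shift while the amplifier supplies the other 180°.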
Robert Noyce’s Solution While At Fairchild
Robert Noyce was co-founder of Fairchild Semiconductor when he came up with his solution. But unlike Kilby, Noyce didn’t solve it by attacking the problem directly. He’d thought a lot about the problem but hadn’t gotten anywhere.
Then, in 1958 Jean Hoerni, also at Fairchild, came up with the planar process, a way of protecting transistors on silicon wafers by covering them with a layer of silicon oxide. Fairchild’s patent lawyer decided he wanted to go broad with the patent for the process and asked Noyce to come up with other uses. And this he did, over a number of days, each time going over his ideas on a blackboard with fellow co-founder, Gordon Moore.
First Noyce thought about how thin wires could be poked down through the silicon oxide layer to connect to the transistors, the layer keeping the wires in place. Then he thought: why use wires at all? Why not print lines of copper right on the oxide layer? But then he took it even further and asked: why not connect transistors together using these printed lines of copper? And then finally he asked himself: why stop at transistors? Why not make resistors and capacitors too, and build an entire integrated circuit?
From that stream of ideas he’d come up with a solution to the tyranny of numbers. By January 23, 1959, his idea for the integrated circuit filled four pages of his notebook.
And so Jack Kilby at TI and Robert Noyce at Fairchild had independently invented the integrated circuit.
As you’d expect, with two independent inventors, patent fights ensued.
Kilby and others at TI had made many improvements for putting components on a common substrate, but they hadn’t solved the problem of connecting them together; they still used thin gold wires. When it came time to make a drawing for the patent, all they had to go on was a version with gold wires “flying” through the air, connecting the components. But just in case, they added a paragraph to the patent about the possibility of evaporating on a silicon oxide layer, and that:

“Electrically conducting material such as gold may then be laid down on the insulating material to make the necessary electrical circuit…”
Fairchild filed a few months after TI, but their patent made it through the system sooner, on April 26, 1961. And of course, a battle ensued. By November 1969 the case had been decided on the wording of TI’s patent, specifically the words “laid down” describing how the gold was applied to the insulating material. It was argued that “laid down” had no clear meaning; Fairchild’s patent had used the words “adherent to” instead. Between the flying wires in TI’s patent drawing and the vague laying down of wires on the oxide layer, the argument went, no one could build an integrated circuit from TI’s patent. The ruling came down in Fairchild’s favor.
But the dates in their respective notebooks showed that Kilby came up with the idea first. The character of the two men was such that they credited each other anyway.
NASA And The Military’s Crucial Funding
When Fairchild and TI started releasing IC products in 1961, an IC containing a few transistors, diodes and resistors cost $120. And so there was no rush to buy them. However, better production techniques that would lower prices couldn’t be developed without higher sales.
It was in part the space race that saved the day. There was a need for a guidance and navigation system which included a computer that could rapidly guide a rocket through different atmospheres and to a precise landing on the moon. And of course that computer had to be lightweight. For that, the government was willing to bear the high price.
As already pointed out, the US military had also been working on the tyranny of numbers problem with their Micro-Modules and Tinkertoy projects, but with little success. Defense systems, such as the Minuteman Missile, required compact and lightweight circuits. And of course the military’s pockets were also deep enough to pay the high price. So the fledgling integrated circuits industry had the military as a second customer.
And so NASA and the military provided the sales needed to drive prices down: by 1971, the average price for a chip was $1.27. They also proved that the product worked.
New Markets Take Over
Slowly ICs started finding uses. In 1964, the UNIVAC 1108 computer’s integrated control register stack was implemented using ICs but it wasn’t until the UNIVAC 1110, introduced in 1972, that much of the discrete logic was replaced with TTL ICs. The Burroughs 6500 in 1969 used hybrid ICs, combining discrete transistors and integrated circuits on a single substrate.
Noyce and Moore founded Intel in 1968, where they pursued memory chips. Their 1-kilobit memory chips saw only small sales in the first year, but by 1973, with the 4-kilobit chips, sales had reached $60 million.
Intel also released the first microprocessor chip, the 4004, in 1971. Meanwhile, the Japanese released the first pocket calculator using ICs in 1970. The Canon Pocketronic calculator released in 1971 was based on a TI project codenamed Cal Tech that Kilby had worked on in the 1960s. Integrated circuits had taken off.
Links to sources have been spread throughout this article, but the article’s heart drew from the very enjoyable book The Chip, by T.R. Reid. It carries you through a tale from Thomas Edison’s explorations in thermionic emission to the awarding of the Nobel Prize in Physics to Jack Kilby on December 10, 2000. Beware, the book is hard to put down at times.