Broadcast Engineer at BellMedia, computer history buff, compulsive hoarder of deprecated and disparate hardware, R/C, robots, Arduino, RF, and everything in between.
5270 stories · 5 followers

STM32 Clones: The Good, The Bad And The Ugly


Whenever a product becomes popular, it’s only a matter of time before other companies start feeling the urge to hitch a ride on this popularity. This phenomenon is the primary reason why so many terrible toys and video games have been produced over the years. Yet it also drives the world of electronics. Hence it should come as no surprise that ST’s highly successful ARM-based series of microcontrollers (MCUs) has seen its share of imitations, clones and outright fakes.

The fakes are probably the most problematic, as those chips pretend to be genuine STM32 parts down to the markings on the IC package, while compatibility with the part they are pretending to be can differ wildly. For the imitations and clones that carry their own markings, things are a bit more fuzzy, as one could reasonably pretend that those companies just so happened to have designed MCUs that purely by coincidence happen to be fully pin- and register compatible with those highly popular competing MCU designs. That would be the sincerest form of flattery.

Let’s take a look at which fakes and imitations are around, and what it means if you end up with one.

Anatomy of a forgery

Good STM32 IC on the left, clone on the right, with extra dimples.

Earlier this year, Keir Fraser posted an informative summary of some fake STM32F103 ICs, as found on so-called ‘Blue Pill’ and similar boards, on his GitHub. The forgeries carry the same marks on the packaging as the genuine STM32 parts, but can often be identified by the pattern of dimples on the packaging, or by the quality of the silkscreen.

These forgeries aren’t always fully functional. As noted by Fraser, many of these parts cannot be programmed properly, or even run code as simple as the universal ‘blinky’ example. It’s possible that these forgeries are in fact defective STM32F103 dies (or similar) that are being sold via less-than-legal channels.

The STM32FEB. STM32 it is not.

More insidious perhaps are the near-forgeries that at first glance may look like the real part, but are betrayed by the identification printed on them: ‘STM32FEBKC6’. That is not a legitimate ST part number, which should be the first tip-off. This is another clone that’s likely to bring you nothing but grief: even when it does work, it is a cut-down version of the STM32F103 design with missing features, and finding detailed information on it is hard as well.

Good artists copy

CS32F103. A more honest clone.

This leaves the trickiest of the clones, in the form of the aforementioned CS32F103. This clone essentially works like the real deal, and can run Blinky compiled for the STM32F103 just fine. Some of these MCUs may even be marked as the ST part, making them hard to identify conclusively.

Some of these are manufactured by CKS (中科芯微), a Chinese company that has apparently made a feature-complete version of the STM32F103, to the point where they have fixed some of the errata listed in the ST datasheet. An article over at CNXSoft provides some more details on this MCU.

A major difference one will quickly encounter with this chip shows up when programming it: the tool aborts with the message "UNEXPECTED idcode: 0x2ba01477". The reason is that a genuine STM32F103 MCU reports the ID 0x1ba01477, so the mismatch confuses the programmer. This can be fixed, for example in OpenOCD, by using a configuration script that specifies either no CPUTAPID check (0), or the ID reported by the CS32 MCU.
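
As a concrete illustration, an OpenOCD user configuration along these lines should do the trick. This is only a minimal sketch: the interface script assumes an ST-Link dongle, and the override value used here is the one reported by the CS32 part.

    # openocd.cfg - minimal sketch for a CS32F103 behind an ST-Link
    source [find interface/stlink.cfg]
    transport select hla_swd

    # The stock stm32f1x target script expects IDCODE 0x1ba01477.
    # Setting CPUTAPID before sourcing it overrides that check:
    # use 0x2ba01477 to match the clone, or 0 to skip the check entirely.
    set CPUTAPID 0x2ba01477

    source [find target/stm32f1x.cfg]

With a file like that in place, the usual OpenOCD flashing and debugging workflow accepts the clone’s IDCODE instead of aborting.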

Giga clones

Probably one of the more famous STM32 clone makers is GigaDevice with their GD32 MCUs. As noted over at SMD Prutser in an article series, the GD32F103 appears to be a faster, more capable version of the STM32F103. It has a higher maximum clock speed and faster Flash storage, with a decapped unit showing that they used two dies inside the package: one for the MCU and one for the Flash storage, allowing for a rather flexible way to change Flash sizes across their product range.

Decapped GD32F103 MCU. The separate Flash die is visible on top.

At first glance the GD32 MCUs look more attractive than the STM32F1 series, with significant increases in clock speed (108 MHz versus 72 MHz) and Flash storage. While the Flash storage on the GD32 would normally be very slow, being a serial SPI ROM, its use of SRAM on the MCU die to ‘cache’ the Flash contents means that it ends up being much faster than on-die Flash storage, with zero wait states required even at full MCU clock speed.

A disadvantage of relying on SRAM instead of pure Flash is that it increases power usage, especially in sleep mode. It also causes a (small) boot-up delay while the SPI ROM’s contents are copied into SRAM before the firmware can run. Depending on the application, these trade-offs may or may not matter. This is of course the same approach the ESP8266 MCU takes, which also uses an external SPI ROM for its firmware.

When it comes to other GD32 devices, however, they seem to be less eager to make direct clones. Their GD32F303 MCU kept the same peripherals as the GD32F103, even though those of the STM32F3 are arguably better. This also prevents their use as a drop-in replacement on STM32F3xx boards. Depending on one’s opinion of the STM32F1 peripherals, this may also affect one’s decision to use those GD32 MCUs.

They’re everywhere

CH32F103 MCU on Blue Pill board.

Although I was aware of the aforementioned fakes and clones, I nevertheless came across a new one recently. This involved the purchase of some ‘Blue Pill’ STM32F103 boards from a big German importer and reseller of all kinds of Maker tat. I wasn’t proud of this, but I needed some cheap boards to use for Black Magic Probes, and they had a good deal. In the comments for the Amazon listing, some people mentioned they had received genuine boards, while others mentioned that theirs were ‘fake’.

In the spirit of morbid curiosity, I got a couple of these boards and was both horrified and pleased to see that I had in fact received Blue Pill boards that did not carry the promised STM32F103C8T6 MCU, but instead one marked CH32F103C8T6. On the bright side it did not claim to be an ‘ST’ part.

Genuine STM32F103 MCU on Blue Pill board.

This CH32F103 MCU is produced by a Chinese company called WCH, with the (Chinese-only) datasheet and reference manual both provided for download. At a cursory glance, both the datasheet and manual show a chip that’s practically identical to the STM32F103, with identical memory mapping and peripheral registers.

Hooking it up to an ST-Link/V2 dongle and connecting to it with OpenOCD results in the same CPUTAPID error as with the CS32F103 MCU when using the STM32F1xx target profile. After making the same change to the stm32f1x.cfg file as suggested by others, I was able to flash the ‘Blinky’ example from my Nodate STM32 project onto the board without further issues.

This suggests that at least the basic RCC (reset & clock control), GPIO and SysTick functionality is similar enough for such a basic test to work. Next, I’ll have to explore whether it also handles the USART, DMA, SPI, I2C and I2S functionality the same way as the STM32F103 MCU that I have on a few other boards. If this MCU is anything like the CS32F103 part, the answer is probably ‘yes’.
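
For reference, the sort of ‘Blinky’ such a test boils down to is tiny. The sketch below is a minimal bare-metal example against the STM32F103 register map (addresses taken from the RM0008 reference manual), assuming the usual Blue Pill LED on PC13 and the 8 MHz internal oscillator active after reset; it still needs the normal Cortex-M startup code and linker script around it.

    #include <stdint.h>

    /* STM32F103 registers, addresses from the RM0008 reference manual */
    #define RCC_APB2ENR (*(volatile uint32_t *)0x40021018u) /* peripheral clock enable */
    #define GPIOC_CRH   (*(volatile uint32_t *)0x40011004u) /* PC8..PC15 configuration */
    #define GPIOC_ODR   (*(volatile uint32_t *)0x4001100Cu) /* output data register    */
    #define SYST_CSR    (*(volatile uint32_t *)0xE000E010u) /* SysTick control/status  */
    #define SYST_RVR    (*(volatile uint32_t *)0xE000E014u) /* SysTick reload value    */
    #define SYST_CVR    (*(volatile uint32_t *)0xE000E018u) /* SysTick current value   */

    static void delay_ms(uint32_t ms)
    {
        SYST_RVR = 8000u - 1u;  /* 8 MHz HSI after reset: 8000 ticks per millisecond */
        SYST_CVR = 0u;
        SYST_CSR = 5u;          /* enable counter, use processor clock, no interrupt */
        while (ms--) {
            while (!(SYST_CSR & (1u << 16))) { /* wait for COUNTFLAG */
            }
        }
        SYST_CSR = 0u;
    }

    int main(void)
    {
        RCC_APB2ENR |= (1u << 4);     /* enable the GPIOC clock (IOPCEN) */

        GPIOC_CRH &= ~(0xFu << 20);   /* clear the PC13 mode/config bits */
        GPIOC_CRH |=  (0x2u << 20);   /* PC13: 2 MHz push-pull output */

        for (;;) {
            GPIOC_ODR ^= (1u << 13);  /* toggle the (active-low) Blue Pill LED */
            delay_ms(500);
        }
    }

If a clone runs something like this, its reset and clock control, GPIO and SysTick blocks at least sit at the same addresses and behave the same way for the simple cases.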

As for the seller’s response when I contacted them about these Blue Pill boards not featuring the advertised STM32 part: they admitted that they were aware of this and claimed that ‘in two months’ they’d have boards with genuine STM32 parts again. Admittedly that raises a lot more questions than it answers, not least why they’d knowingly sell boards that do not feature the advertised MCU.

Time to panic?

The eagle-eyed among us may have noticed that virtually all of these clones involve ST’s first-generation Cortex-M MCUs (STM32F1 series). Unless you need to buy Blue Pill boards for commercial projects, this is unlikely to do more than seriously annoy hobbyists and others who like to have a stack of $3 Cortex-M3 boards around for random projects. If one orders MCUs and development boards from reputable sellers such as Digikey and Mouser, it’s also unlikely to be much of a concern.

The Blue Pill and Black Pill boards are also seeing a bit of an overhaul lately, with updated versions featuring STM32F4-based MCUs. Although a bit more expensive than their STM32F103-based counterparts, they do bring considerably more resources to the table, along with the much more pleasant (in my opinion) peripherals of the STM32F4 line. These may just make the market for the STM32F103, and with it these countless clones, counterfeits, and copies, dry up.

Until the first batches of counterfeit, cloned and copied STM32F401 and STM32F411 MCUs hit the market, naturally. Because that’s apparently the name of the game.


Documentary on Obama White House photographer Pete Souza


Pete Souza took some of the most historically significant photos of both Ronald Reagan and Barack Obama while both served as president. He talks about what it takes to be a good documenter of a President in this profile.

Be sure to check out The Way I See It if you want to learn more about his interesting career.


Ethernet At 40: From A Napkin Sketch To Multi-Gigabit Links


September 30th, 1980 is the day when Ethernet was first commercially introduced, exactly forty years ago this year. It was first described in a patent filed by Xerox in 1975, introduced to the market as a 10 Mb/s networking protocol in 1980, and subsequently standardized in 1983 by the IEEE as IEEE 802.3. Over the next thirty-seven years, this standard would see numerous updates and revisions.

Included in the present Ethernet standard are not just the different speed grades from the original 10 Mbit/s to today’s maximum 400 Gb/s speeds, but also the countless changes to the core protocol to enable these ever higher data rates, not to mention new applications of Ethernet such as power delivery and backplane routing. The reliability and cost-effectiveness of Ethernet would result in the 1990 10BASE-T Ethernet standard (802.3i-1990) that gradually found itself implemented on desktop PCs.

With Ethernet these days being as present as the presumed luminiferous aether that it was named after, this seems like a good point to look at what made Ethernet so different from other solutions, and what changes it had to undergo to keep up with the demands of an ever-more interconnected world.

The novelty of connecting computers

IBM PCs, connected.

These days, most computers and computerized gadgets are little more than expensive paperweights whenever they find themselves disconnected from the global Internet. Back in the 1980s, people were just beginning to catch on to the things one could do with a so-called ‘local area network’, or LAN. Unlike the 1960s and 1970s era of mainframes and terminal systems, a LAN entailed connecting microcomputers (IBM PCs, workstations, etc.) at, for example, an office or laboratory.

During this transition from sneakernet to Ethernet, office networks would soon involve thousands of nodes, leading to the wonderful world of the centrally managed office network. With any document available via the network, the world seemed ready for the paperless office. Although that never happened, the ability to communicate and share files via networks (LAN and WAN) has now become a staple of everyday life.

Passing the token

The circuitous world of Token Ring configurations.

What did change was the landscape of commodity network technology. Ethernet’s early competition was a loose collection of smaller network protocols, including IBM’s Token Ring. Although many myths formed about the presumed weaknesses of Ethernet in the 1980s, summarized in this document (PDF) from the 1988 SIGCOMM Symposium, ultimately Ethernet turned out to be more than sufficient.

Token Ring’s primary claim to superiority was its determinism, as opposed to Ethernet’s carrier-sense multiple access with collision detection (CSMA/CD) approach. This led to the most persistent myth: that Ethernet couldn’t sustain utilization beyond 37% of its bandwidth.

For cost reasons, the early years of Ethernet were dominated by dumb hubs instead of smarter switches. This meant that the Ethernet adapters had to sort out the collisions themselves. And as anyone who has used Ethernet hubs probably knows, the surest sign of a busy Ethernet network was a constantly lit ‘collision’ LED on the hub(s). As Ethernet switches became more affordable, hubs quickly vanished. Because switches establish a dedicated path between two nodes instead of relying on CSMA/CD to sort things out, the whole collision issue that made hubs (and Ethernet along with them) the target of many jokes disappeared, and the myth was busted.
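
For the curious, ‘sorting out the collisions’ meant classic truncated binary exponential backoff: after each collision, an adapter waits a random number of slot times before retrying, with the range doubling on every attempt. Below is a rough sketch of the idea in C, using the 10 Mb/s parameters (one slot is 512 bit times, retries capped at 16); it illustrates the algorithm rather than any particular adapter’s implementation.

    #include <stdint.h>
    #include <stdlib.h>

    #define SLOT_TIME_US  51.2  /* one slot = 512 bit times at 10 Mb/s */
    #define MAX_ATTEMPTS  16    /* a frame is dropped after 16 failed tries */
    #define BACKOFF_CAP   10    /* the random range stops doubling after 10 collisions */

    /* Microseconds to wait before retry number `attempt` (1-based),
       or a negative value once the frame should be given up on. */
    static double backoff_us(unsigned attempt)
    {
        if (attempt > MAX_ATTEMPTS)
            return -1.0;                                 /* excessive collisions */

        unsigned k = attempt < BACKOFF_CAP ? attempt : BACKOFF_CAP;
        unsigned slots = (unsigned)rand() % (1u << k);   /* pick 0 .. 2^k - 1 slots */
        return slots * SLOT_TIME_US;
    }

Because every station rolls its own random delay, colliding senders quickly spread out in time instead of retrying in lockstep, which is part of why measured networks did far better than the 37% myth suggested.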

Once Ethernet began to allow for the use of cheaper Cat. 3 (UTP) for 10BASE-T and Cat. 5(e) UTP cables for 100BASE-TX (and related) standards, Ethernet emerged as the dominant networking technology for everything from homes and offices to industrial and automotive applications.

A tree of choices

The increased spectral bandwidth use of copper wiring by subsequent Ethernet standards.

While the list of standards under IEEE 802.3 may seem rather intimidating, a more abbreviated list for the average person can be found on Wikipedia as well. Of these, the ones most people are likely to have encountered at some point are:

  • 10BASE-T      (10 Mb, Cat. 3).
  • 100BASE-TX (100 Mb, Cat. 5).
  • 1000BASE-T (1 Gb, Cat. 5).
  • 2.5GBASE-T  (2.5 Gb, Cat. 5e).

While the 5GBASE-T and 10GBASE-T standards have also been in use for a few years now, the 25 Gb and 40 Gb versions are definitely reserved for data centers at this point, as they require Cat. 8 cables and only allow for runs of up to 36 meters. The remaining standards in the list are primarily aimed at automotive and industrial applications, some of which are fine with 100 Mbit connections.

Still, the time is now slowly arriving where a whole gigabit is no longer enough, as some parts of the world actually have Internet connections that match or exceed this rate. Who knew that at some point a gigabit LAN could become the bottleneck for one’s Internet connection?

ALOHA

The Xerox 9700, the world’s first Ethernet-connected laser printer.

Back in 1972, a handful of engineers over at Xerox’s Palo Alto Research Center (PARC) including Robert “Bob” Metcalfe and David Boggs were assigned the task of creating a LAN technology to provide a way for the Xerox Alto workstation to hook up to the laser printer, which had also been developed at Xerox.

This new network technology would have to allow for hundreds of individual computers to connect simultaneously and feed data to the printer quickly enough. During the design process, Metcalfe used his experience with ALOHAnet, a wireless packet data network developed at the University of Hawaii.

Metcalfe’s first Ethernet sketch.

The primary concept behind ALOHAnet was the use of a shared medium for client transmissions, with stations simply retransmitting after a random delay whenever their packets collided. Refining this into a ‘listen before send’ protocol produced what would become known as ‘carrier sense multiple access’ (CSMA). This would not only go on to inspire Ethernet, but also WiFi and many other technologies. In the case of Ethernet, the aforementioned CSMA/CD formed an integral part of the early Ethernet standards.

Coaxial cabling was used for the common medium, which required the use of the cherished terminators at the end of every cable, and adding additional nodes required taps to attach each Ethernet network interface card to the bus. This first version of Ethernet is also called ‘thicknet’ (10BASE5) due to the rather unwieldy 9.5 mm thick coax cables it used. A second version (10BASE2) used much thinner coax cable (RG-58A/U) with BNC connectors, and was therefore affectionately called ‘thinnet’.

The best plot twist

Don’t forget to terminate your bus.

In the end, it was the use of unshielded, twisted-pair cabling that made Ethernet more attractive than Token Ring. Along with cheaper interface cards, it turned into a no-brainer for people who wanted a LAN at home or the office.

As anyone who has ever installed or managed a 10BASE5 or 10BASE2 network probably knows, interference on the bus, or issues with a tap or AWOL terminator can really ruin a day. Not that figuring out where the token dropped off the Token Ring network is a happy occasion, mind you. Although the common-medium, ‘aether’ part of Ethernet has long been replaced by networks of switches, I’m sure many IT professionals are much happier with the star architecture.

Thus it is that we come from the sunny islands of Hawaii to the technology that powers our home LANs and data centers. Maybe something else would have come along to do what Ethernet does today, but personally I’m quite happy with how things worked out. I remember the first LAN that got put in place at my house during the late 90s when I was a kid, first to allow my younger brother and me to share files (i.e. LAN gaming), then later to share the cable internet connection. It allowed me to get up to speed with this world of IPX/SPX, TCP/IP and much more network-related stuff, in addition to the joys of LAN parties and being the system administrator for the entire family.

Happy birthday, Ethernet. Here is to another forty innovative, revolutionary years.


Does Your Phone Need a RAM Drive?


Phones used to be phones. Then we got cordless phones, which were part phone and part radio. Then we got cell phones. But with smartphones, we have a phone that is both a radio and a computer. Tiny battery-operated computers are typically a bit anemic, but as technology marches forward, those tiny computers have grown to the point that they outpace desktop machines from a few years ago. That means more and more phones are incorporating technology we used to reserve for desktop computers and servers. Case in point: Xiaomi now has a smartphone that sports a RAM drive. Is this really necessary?

While people like to say you can never be too rich or too thin, memory can never be too big or too fast. Unfortunately, that’s always been a zero-sum game. Fast memory tends to be lower-density while large capacity memory tends to be slower. The fastest common memory is static RAM, but that requires a lot of area on a chip per bit and also consumes a lot of power. That’s why most computers and devices use dynamic RAM for main storage. Since each bit is little more than a capacitor, the density is good and power requirements are reasonable. The downside? Internally, the memory needs a rewrite when read or periodically before the tiny capacitors discharge.

Although dynamic RAM density is high, flash memory still serves as the “disk drive” for most phones. It is dense, cheap, and — unlike RAM — holds data with no power. The downside is the interface to it is cumbersome and relatively slow despite new standards to improve throughput. There’s virtually no way the type of flash memory used in a typical phone will ever match the access speeds you can get with RAM.

So, are our phones held back by the speed of the flash? Are they calling out for a new paradigm that taps the speed of RAM whenever possible? Let’s unpack this issue.

Yes, But…

If your goal is speed, then one answer has always been to make a RAM disk. These were staples in the old days when you had very slow disk drives. Linux often mounts transient data using tmpfs, which is effectively a RAM drive. A disk that refers to RAM instead of flash memory (or anything slower) is going to be super fast compared to a normal drive.
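
Setting one up by hand on a Linux machine is a one-liner; the mount point and size below are arbitrary examples rather than anything Xiaomi uses:

    # create a 256 MB RAM drive at /mnt/ramdisk
    mkdir -p /mnt/ramdisk
    mount -t tmpfs -o size=256m tmpfs /mnt/ramdisk

    # or have it recreated on every boot with an /etc/fstab entry like:
    # tmpfs  /mnt/ramdisk  tmpfs  size=256m,mode=1777  0  0

Anything written there lives in RAM (spilling to swap only under memory pressure) and vanishes on reboot, which matches the behaviour described for Xiaomi’s RAM drive.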

But does that really matter on these phones? I’m not saying you don’t want your phone to run fast, especially if you are trying to do something like gaming or augmented reality rendering. What I’m saying is this: modern operating systems don’t make such a major distinction between disk and memory. They can load frequently used data from disk into RAM caches or buffers and manage that quite well. So what advantage is there in storing stuff in RAM all the time? If you just copy the flash storage to RAM and then write it back before you shut down, that will certainly improve speed, but you will also waste a lot of time grabbing stuff you never need.

Implementation

According to reports, the DRAM in Xiaomi’s phone can reach up to 44 GB/s, compared to the flash memory’s 1.7 GB/s reads and 0.75 GB/s writes. Those are all theoretical maximums, of course, so take them with a grain of salt, but the ratio should be similar even with real-world measurements.

The argument is that (according to Xiaomi) games could install and load 40% to 60% faster. But this raises the question: how did the game get into RAM to start with? At first we thought the idea was to copy the entire flash to RAM, but that appears not to be the case. Instead, the concept is to load games directly into the RAM drive from the network and then mark them so the user can see that they will disappear on a reboot. The launcher will show a special icon on the home screen to warn you that the game is only temporary.

So it seems that unless your phone is never turned off, you are trading a few seconds of load time for repeatedly installing the game over the network. I don’t think that’s much of a use case. I’d rather have the device intelligently pin data in a cache. In other words, allow a flag on game files that tells the system to keep them in cache until there is simply no choice but to evict them, and you’d have a better system: a comparatively fast load from flash memory once, followed by very fast startups on subsequent executions until the phone powers down. The difference is you won’t have to reinstall every time you reset the phone.

This is Not a Hardware RAM Drive

There have been hardware RAM drives, but that’s really a different animal. Software RAM drives, which take part of main memory and make it look like a disk, appear to have originated in the UK around 1980 in the form of the Silicon Disk System for CP/M and, later, MS-DOS. Other computers of that era, including Apple, Commodore, and Atari machines, were also known to support the technique.

In 1984, IBM differentiated PC DOS from MS-DOS by adding a RAM disk driver, something Microsoft would duplicate in 1986. However, all of these machines had relatively small amounts of memory and couldn’t spare much for general-purpose buffering. Allowing a human to determine that it made sense to keep a specific set of files in RAM was a better solution back then.

On the other hand, the Xiaomi design does have one important feature: it is good press. We wouldn’t be talking about this phone if they hadn’t incorporated a RAM drive. I’m just not sure it matters much in real-life use.

We’ve seen RAM disks cache browser files that are not important to store across reboots, and that usually works well. It is also a pretty common trick in Linux. Even then, the real advantage isn’t the faster memory so much as removing the need to write cached data to slow disks when it doesn’t need to persist anyway.

Reader comment from tekvax: “I remember these well. Linux still uses them to mount kernel modules while booting!”

Meryl Streep reads from "Trumpty Dumpty," John Lithgow's new book


Yes, John Lithgow has authored and illustrated a book. Actually, Trumpty Dumpty Wanted a Crown: Verses for a Despotic Age ($23) is his second volume of satirical poems aimed at Trump; the first, Dumpty, was released last October.


Coca Cola is dropping Tab


Tab. What a beautiful drink. Tab. For beautiful people.

Beautiful people are at a loss for what to drink after learning that Coca-Cola will no longer make their beloved diet soda. The famously awful-tasting beverage is being canceled along with others in a purge of underperforming products.

Reader comment from tekvax: “I haven't seen Tab for sale anywhere, for many many years...”