Broadcast Engineer at BellMedia, Computer history buff, compulsive deprecated, disparate hardware hoarder, R/C, robots, arduino, RF, and everything in between.

Mastering The Tricky Job of Soldering SMA Connectors


There’s a satisfaction in watching someone else at work, particularly when they are demonstrating a solution to a soldering problem you have encountered in the past. SMA panel sockets have a particularly tiny solder bucket on their reverse, and since they often need to be soldered onto brass rod as part of microwave antenna construction, they present a soldering challenge. [Andrew McNeil] is here to help, with a foolproof method of achieving a joint that is both electrically and mechanically sound.

The best connections to a solder bucket come when the wire connected to it nestles within its circular center. If this doesn’t happen and a blob of solder merely encapsulates both wire and bucket, the mechanical strength of the solder blob alone is not usually sufficient. The brass rod is wider than the bucket, so he takes us through carefully grinding it down to a diameter that sits neatly inside the bucket, where the solder can be sweated into the gap. The result is quick and simple, and has that essential satisfaction we mentioned earlier. It’s a small hack, but if you’ve ever soldered to a too-small RF connector you’ll understand. For more fun and games with RF connectors, take a look at our overview.




Paul Taylor Opened the Lines of Telecommunication for the Hearing-Impaired


These days, nearly everyone communicates through some kind of keyboard, whether they are texting, emailing, or posting on various internet discussion forums. Talking over the phone is almost outmoded at this point. But only a few decades ago, the telephone was king of real-time communication. It was and still is a great invention, but unfortunately the technology left the hearing and speaking-impaired communities on an island of silence.

Paul and an early TDD. Image via Rochester Institute of Technology

Engineer and professor Paul Taylor was born deaf in 1939, long before cochlear implants or the existence of laws that called for testing and early identification of hearing impairment in infants. At the age of three, his mother sent him by train to St. Louis to live at a boarding school called the Central Institute for the Deaf (CID).

Here, he was outfitted with a primitive hearing aid and learned to read lips, speak, and use American Sign Language. At the time, this was the standard plan for deaf and hearing-impaired children — to attend such a school for a decade or so and graduate with the social and academic tools they needed to succeed in public high schools and universities.

After college, Paul became an engineer and in his free time, a champion for the deaf community. He was a pioneer of Telecommunications Devices for the Deaf, better known as TDD or TTY equipment in the US. Later in life, he helped write legislation that became part of the 1990 Americans with Disabilities Act.

Paul was diagnosed with Alzheimer’s in 2017 and died in January of 2021 at the age of 81. He always believed that the more access a deaf person had to technology, the better their life would be, and spent much of his life trying to use technology to improve the deaf experience.

High school-aged Paul. Image via YouTube

Learning to Speak Without Hearing

Soon after three-year-old Paul started school at CID, he met a little girl named Sally Hewlett who would one day become his wife. Along with their classmates, they spent the next several years learning to speak by holding their hands to the teacher’s face to feel the vibrations of speech, then touching their own faces while mimicking the movement and sound.

Paul’s father died while he was still in school. His mother moved to St. Louis to be with her son so that he could keep attending CID while living at home. She took the opportunity to study at CID herself and became an accredited teacher of deaf children. When it was time for high school, Paul and his mother moved to Houston, where she started a school for the deaf, and he enrolled in public school for the first time. Paul had no interpreter, no helper of any kind.

In a 2007 documentary made by the Taylors’ youngest daughter, Paul tells a story about an experience he had in high school. There was a nice-looking girl in his class, and he wanted to know more about her, so he asked a different girl who she was. When that girl offered to give Paul the first girl’s telephone number, he stopped in his tracks, realizing at that moment how different he was because he couldn’t use the phone like all the other kids. The experience stuck with him and helped drive his life’s work.

AT&T’s Picturephone as it premiered at the 1964 World’s Fair. Image source: AT&T Archives and History Center via LA Times

Phones for All

After high school, Paul completed a bachelor’s degree in chemical engineering at the Georgia Institute of Technology in 1962 and moved back to St. Louis to earn a master’s degree in operations research at Washington University. In the meantime, Sally, who had gone to high school in St. Louis, earned her bachelor’s degree in home economics and returned to CID to teach physical education, religion, and home economics. When Paul learned that Sally was living in town, he got in touch with her immediately. They started dating and were engaged six months later.

Paul took Sally to the 1964 World’s Fair in Queens, New York for their first anniversary. They marveled at AT&T’s Picturephone and wished the future would arrive sooner so they could easily talk from anywhere by reading each other’s lips. By day, Paul was an engineer at McDonnell Douglas and later, Monsanto. He was a different kind of engineer at home, devising ways to help raise their three hearing children. After their first child was born, Paul built a system that would blink the lights in the house to let them know the baby was crying.

Paul, Sally, and their son David along with one of the first teletypewriters that was repurposed as a telecommunication device for the deaf. Image via Rochester Institute of Technology

He also did whatever he could to help the deaf community by volunteering his time. The phone problem still bothered him greatly. When he noticed an old Western Union teletype machine from WWII just sitting around collecting dust, he got the idea to turn it into a new kind of communication tool.

Around the same time, a deaf physicist named Robert Weitbrecht was developing an acoustic coupler that would transmit teletype signals over consumer phone lines. Paul got Weitbrecht to send him one and created one of the first telecommunication devices for the deaf (TDD). With one of these devices on each end of the phone line, anything typed on one would be printed out on the other. Paul worked with Western Union to get these old teletypewriters into the hands of hearing and speaking-impaired people, and convinced AT&T to create a relay service to use them as well.

Paul started a non-profit organization to distribute these early TDDs to other deaf St. Louisans. He asked a local telephone wake-up call service to help out, and built one of the first telephone relay systems in the process. Although both parties needed a TDD to be able to communicate, this was a big step in the right direction.

Paul also did a lot of work to keep the machines humming for the people who depended on them. Teletypewriter manuals were helpful, but were awfully dense reading material for the layman. Paul organized a week-long workshop to create a picture-rich manual called Teletypewriters Made Easy to help people repair and maintain three common models of teletypewriter. Paul discusses his personal history with TDD development in the video below.

A Loud Voice for the Deaf Community

In 1975, Paul was offered a position at the National Technical Institute for the Deaf at Rochester Institute of Technology, so the Taylor family moved to upstate New York. Paul became a computer technology professor and chairman of the Engineering Support team. He stayed there for the next 30 years before retiring.

A more modern TTY. Image via YouTube

During this time, he advocated for a national, operator-assisted telephone relay service through which deaf and hearing-impaired people could communicate with anyone, whether or not the other person had a TDD.

The idea was that the deaf person would use a TTY to call an operator, who would get the other person on the line and relay messages back and forth between the two parties by typing out what the voice caller said and reading aloud what the TDD user typed in response. Paul took a two-year leave of absence from teaching and worked directly with the FCC to write regulations that became part of the guidelines prescribed in the 1990 Americans with Disabilities Act (ADA).

Learning How to Hear

At the age of 65, Paul and Sally decided to get cochlear implants after a lifetime of silence. Their youngest daughter Irene made a documentary about their experience called Hear and Now, which is embedded below. It’s an interesting firsthand look into the process, which is not the instant cure the internet may have led you to believe it is. The implant can’t be activated until the swelling from surgery goes down, which takes about a month. And it can take years for the brain to get used to the new sensory information and begin to distinguish relevant sounds from background noise.

Although TTY/TDDs are falling out of use thanks to the video- and text-capable devices in most people’s pockets, their influence on communication lives on in the shorthand now used in our everyday messages — OIC, PLS, and THX are older than you might think.

Thanks for the tip, [Zoobab].




Real-Time OS Basics: Picking The Right RTOS When You Need One


When do you need to use a real-time operating system (RTOS) for an embedded project? What does it bring to the table, and what are the costs? Fortunately there are strict technical definitions, which can also help one figure out whether an RTOS is the right choice for a project.

The “real-time” part of the name covers the basic premise of an RTOS: the guarantee that certain types of operations will complete within a predefined, deterministic time span. Within “real time” we find distinct categories: hard, firm, and soft real-time, with increasingly less severe penalties for missing the deadline. As an example of a hard real-time scenario, imagine a system where the embedded controller has to respond to incoming sensor data within a specific timespan. If missing that deadline would break downstream components of the system, figuratively or literally, the deadline is hard.

In comparison, soft real-time would be the kind of operation where it would be great if the controller responded within this timespan, but if it takes a bit longer, that would be totally fine, too. Some operating systems are capable of hard real-time, whereas others are not. This is mostly a factor of their fundamental design, especially the scheduler.

In this article we’ll take a look at a variety of operating systems, to see where they fit into these definitions, and when you’d want to use them in a project.

A Matter of Scale

Different embedded OSes address different types of systems, and have different feature sets. The most minimalistic of popular RTOSes is probably FreeRTOS, which provides a scheduler and, with it, multi-tasking primitives: threads, mutexes, semaphores, and thread-safe heap allocation methods. Depending on the project’s needs, you can pick from a number of dynamic allocation methods, or allow only static allocation.
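
As a concrete taste of those primitives, here is a minimal FreeRTOS sketch: two tasks share a peripheral through a mutex, and the scheduler is started from main(). It assumes a FreeRTOS port and FreeRTOSConfig.h are already set up for the target; the task names and their duties are invented purely for illustration.

/* Minimal FreeRTOS sketch: two tasks sharing a resource through a mutex.
   The task bodies are placeholders; the RTOS calls are the point here. */
#include "FreeRTOS.h"
#include "task.h"
#include "semphr.h"

static SemaphoreHandle_t xBusMutex;

static void vSensorTask(void *pvParameters)
{
    (void)pvParameters;
    for (;;) {
        if (xSemaphoreTake(xBusMutex, portMAX_DELAY) == pdTRUE) {
            /* ...talk to the shared peripheral here... */
            xSemaphoreGive(xBusMutex);
        }
        vTaskDelay(pdMS_TO_TICKS(10));    /* run every 10 ms */
    }
}

static void vLoggerTask(void *pvParameters)
{
    (void)pvParameters;
    for (;;) {
        if (xSemaphoreTake(xBusMutex, portMAX_DELAY) == pdTRUE) {
            /* ...flush buffered data out... */
            xSemaphoreGive(xBusMutex);
        }
        vTaskDelay(pdMS_TO_TICKS(100));
    }
}

int main(void)
{
    xBusMutex = xSemaphoreCreateMutex();
    xTaskCreate(vSensorTask, "sensor", 256, NULL, 2, NULL);   /* higher priority */
    xTaskCreate(vLoggerTask, "logger", 256, NULL, 1, NULL);
    vTaskStartScheduler();    /* hands control to the scheduler; never returns */
    for (;;) { }              /* only reached if there was not enough heap */
}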

On the other end of the scale we find RTOSes such as VxWorks, QNX and Linux with real-time scheduler patches applied. These are generally POSIX-certified or compatible operating systems, which offer the convenience of developing for a platform that’s highly compatible with regular desktop platforms, while offering some degree of real-time performance guarantee, courtesy of their scheduling model.

Again, an RTOS is only an RTOS if the scheduler comes with a guarantee of a certain level of determinism when switching tasks.

Real-Time: Defining ‘Immediately’

Even outside the realm of operating systems, real-time performance of processors can differ significantly. This becomes especially apparent when looking at microcontrollers and the number of cycles required for an interrupt to be processed. For the popular Cortex-M MCUs, for example, the interrupt latency is given as ranging from 12 cycles (M3, M4, M7) to 23+ (M1), best case. Divide by the processor speed, and you’ve got a quarter microsecond or so.
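
The arithmetic is simple enough to put in a throwaway snippet; the 48 MHz clock below is an assumed example figure, not something taken from the documentation quoted here.

/* Back-of-the-envelope interrupt latency: cycles divided by clock frequency.
   12 cycles at an assumed 48 MHz core clock works out to 250 ns. */
#include <stdio.h>

int main(void)
{
    const double cycles   = 12.0;   /* best-case Cortex-M3/M4/M7 figure */
    const double f_clk_hz = 48e6;   /* assumed core clock */
    printf("latency = %.0f ns\n", cycles / f_clk_hz * 1e9);
    return 0;
}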

In comparison, when we look at Microchip’s 8051 range of MCUs, we can see in the ‘Atmel 8051 Microcontrollers Hardware Manual’, section 2.16.3 (‘Response Time’), that depending on the interrupt configuration, the interrupt latency can be anywhere from 3 to 8 cycles. On x86 platforms the story is more complicated, due to the somewhat convoluted nature of x86 IRQs, but it still works out to some fraction of a microsecond.

This latency places an absolute bound on the best real-time performance an RTOS can accomplish, though due to the overhead of running a scheduler, an RTOS doesn’t come close to that bound. This is why, for the absolute best real-time performance, a single polling loop with fast interrupt handler routines for incoming events is by far the most deterministic approach.
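
That polling-loop pattern is worth sketching, since it comes up again later in the article: the ISR does the bare minimum and sets a flag, and the main loop polls flags and does the real work. The interrupt handler name and the read_sensor()/handle_sample() helpers below are hypothetical placeholders, not from any particular vendor library.

/* Bare-metal 'superloop' sketch: the ISR stays short and deterministic,
   the main loop polls flags and does the longer processing. */
#include <stdint.h>
#include <stdbool.h>

static volatile bool     sample_ready;
static volatile uint16_t latest_sample;

static uint16_t read_sensor(void)         { return 0; }   /* stand-in for a driver */
static void     handle_sample(uint16_t s) { (void)s; }    /* process/store sample  */

/* Hypothetical handler for a sensor 'data ready' interrupt. */
void SENSOR_IRQHandler(void)
{
    latest_sample = read_sensor();   /* a few cycles of work, nothing more */
    sample_ready  = true;
}

int main(void)
{
    for (;;) {                               /* the superloop */
        if (sample_ready) {
            sample_ready = false;
            handle_sample(latest_sample);    /* heavier work outside the ISR */
        }
        /* ...poll other flags, kick the watchdog, etc... */
    }
}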

If the interrupt, or other context switch, costs cycles, running the underlying processor faster can obviously reduce latency too, but that comes with other trade-offs, not least higher power usage and increased cooling requirements.

Adding Some Cool Threads

As FreeRTOS demonstrates, the primary point of adding an OS is to add multi-tasking (and multi-threading) support. This means a scheduler module that uses some kind of scheduling mechanism to chop the processor time into ‘slices’ in which different tasks, or threads, can be active. While the simplest multi-tasking scheduler is a cooperative one, where each thread voluntarily yields to let other threads do their thing, this has the distinct disadvantage that any single thread can ruin everything for the other threads.

Most real-time OSes instead use a preemptive scheduler. This means that application threads have no control over when they get to run or for how long. Instead, an interrupt routine triggers the scheduler to choose the next thread for execution, taking care to differentiate between which tasks are preemptable and which are not. So-called kernel routines, for example, might be marked as non-preemptable, as interrupting them may cause system instability or corruption.
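
In FreeRTOS terms, application code can also carve out short non-preemptable stretches for itself. A hedged sketch of the two usual mechanisms is below, with update_shared_state() as a made-up placeholder.

/* Two ways to keep the FreeRTOS scheduler from preempting a short code path. */
#include "FreeRTOS.h"
#include "task.h"

static void update_shared_state(void) { /* ...touch data shared between tasks... */ }

void do_atomic_update(void)
{
    /* Suspend the scheduler: no task switches, but interrupts still run. */
    vTaskSuspendAll();
    update_shared_state();
    xTaskResumeAll();

    /* A critical section also masks interrupts up to
       configMAX_SYSCALL_INTERRUPT_PRIORITY, so keep it very short. */
    taskENTER_CRITICAL();
    update_shared_state();
    taskEXIT_CRITICAL();
}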

Although both Windows and Linux, in their usual configuration, use a preemptive scheduler, these schedulers are not considered suitable for real-time use, as they are tuned to prioritize foreground tasks. User-facing tasks, such as a graphical user interface, will keep operating smoothly even if background tasks face a shortage of CPU cycles. This is what makes some real-time tasks on desktop OSes such a chore, requiring various workarounds.

A good demonstration of the difference a real-time-focused preemptive scheduler makes can be found in the x86 version of the QNX RTOS. While it runs fine on an x86 desktop system, the GUI will begin to hang and get sluggish when background tasks are running, as the scheduler does not give the foreground tasks (the GUI) special treatment. The Linux kernel’s real-time patch likewise changes the default behavior of the scheduler to put the handling of interrupts first and foremost, while otherwise not distinguishing between individual tasks unless explicitly configured to do so by setting thread priorities.
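
On a Linux system, “explicitly setting thread priorities” usually means asking for one of the POSIX real-time scheduling policies. A minimal sketch, assuming root or the CAP_SYS_NICE capability, with rt_worker() as a placeholder:

/* Give one pthread a real-time SCHED_FIFO priority on Linux. Build with -pthread. */
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>

static void *rt_worker(void *arg)
{
    (void)arg;
    /* ...latency-sensitive work goes here... */
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_attr_t attr;
    struct sched_param param = { .sched_priority = 80 };   /* 1..99 for SCHED_FIFO */

    pthread_attr_init(&attr);
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
    pthread_attr_setschedparam(&attr, &param);
    /* Without this, the new thread inherits the parent's normal policy. */
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);

    int ret = pthread_create(&tid, &attr, rt_worker, NULL);
    if (ret != 0)
        fprintf(stderr, "pthread_create: %s\n", strerror(ret));  /* often EPERM */
    else
        pthread_join(tid, NULL);

    pthread_attr_destroy(&attr);
    return 0;
}

SCHED_FIFO works on a stock kernel as well; the real-time patch is what keeps the worst-case latencies of such a thread bounded.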

RTOS or Not, That’s the Question

At this point it should be clear what is meant by “real-time”, and you may have some idea of whether a project would benefit from an RTOS, a plain OS, or an interrupt-driven ‘superloop’ approach. There’s no one-size-fits-all answer here, but in general one seeks to strike a balance between the real-time performance required and the available time and budget. Or, in the case of a hobby project, how far one can be bothered to go in optimizing it.

The first thing to consider is whether there are any hard deadlines in the project. Imagine you have a few sensors attached to a board that need to be polled at exactly the same intervals, with the results written to an SD card. If jitter of more than a few dozen cycles between readings would render the results useless, you have a hard real-time requirement of that many cycles.

We know that the underlying hardware (MCU, SoC, etc.) has either a fixed or worst-case interrupt latency. This determines the best-case scenario. In the case of an interrupt-driven single loop approach, we can likely meet these requirements easily, as we can sum up the worst-case interrupt latency, the cycle cost of our interrupt service routine (ISR), and the worst-case time it would take to process and write the data to the SD card. This would be highly deterministic.

In the case of our sensors-and-SD-card example, the RTOS version would likely add latency compared to the single-loop version, on account of the overhead from its scheduler. But then imagine that writing to the SD card took a lot of time, and that you wanted to handle infrequent user input as well.

With an RTOS, because the samples need to be taken as close together as possible, you’d want to make the sampling task non-preemptable and give it a hard scheduling deadline, while the tasks of writing to the SD card and handling user input get a lower priority. If the user has typed a lot, the RTOS might swap back to handling the data collection in the middle of processing strings, for instance, to make a timing deadline. You, the programmer, don’t have to worry about it.
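
A sketch of how that split might look in FreeRTOS: a high-priority task samples on a fixed tick and pushes data into a queue, and a lower-priority task drains the queue to the card. The read_sensor() and sd_write() helpers are hypothetical placeholders, and the priorities and timings are illustrative only.

/* Sensors-and-SD-card split under an RTOS: the sampler preempts the writer. */
#include <stdint.h>
#include "FreeRTOS.h"
#include "task.h"
#include "queue.h"

static QueueHandle_t xSampleQueue;

static uint16_t read_sensor(void)         { return 0; }     /* stand-in for a driver */
static void     sd_write(uint16_t sample) { (void)sample; } /* stand-in for SD I/O   */

static void vSampleTask(void *pv)
{
    (void)pv;
    TickType_t xLastWake = xTaskGetTickCount();
    for (;;) {
        uint16_t sample = read_sensor();
        xQueueSend(xSampleQueue, &sample, 0);            /* never block the sampler */
        vTaskDelayUntil(&xLastWake, pdMS_TO_TICKS(1));   /* fixed 1 ms cadence */
    }
}

static void vSdTask(void *pv)
{
    (void)pv;
    uint16_t sample;
    for (;;) {
        if (xQueueReceive(xSampleQueue, &sample, portMAX_DELAY) == pdTRUE)
            sd_write(sample);   /* slow, but preempted whenever a sample is due */
    }
}

int main(void)
{
    xSampleQueue = xQueueCreate(128, sizeof(uint16_t));
    xTaskCreate(vSampleTask, "sample", 256, NULL, 3, NULL);   /* higher priority */
    xTaskCreate(vSdTask,     "sdcard", 512, NULL, 1, NULL);
    vTaskStartScheduler();
    for (;;) { }
}

The queue gives the SD task slack to fall behind briefly without the sampler ever missing its slot.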

In short: an RTOS offers deterministic scheduling, while an interrupt-driven single loop eliminates the need for scheduling altogether, aside from making sure that your superloop turns around frequently enough.

Creature Comforts

When one pulls away the curtain, it’s obvious that to the processor hardware, ‘threads’ and thread-synchronization mechanisms such as mutexes and semaphores are merely software constructs implemented on top of hardware features. Deep inside we all know that a single-core MCU isn’t really running all tasks simultaneously when a scheduler performs its multi-tasking duty.

Yet an RTOS – even a minimalistic one like FreeRTOS – allows us to use those software constructs even when we need to stay as close to the hardware as possible for performance reasons. Here we strike the balance between performance and convenience, with FreeRTOS leaving us to our own devices when it comes to interacting with the rest of the system. Other RTOSes, like NuttX, QNX, and VxWorks, offer a full-blown POSIX-compatible environment that supports at least a subset of standard Linux code.

While it’s easy to think of FreeRTOS, for example, as an RTOS that one would stuff on an MCU, it runs just as well on larger SoCs. Similarly, ChibiOS/RT happily runs on anything from an 8-bit AVR MCU to a beefy x86 system. Key here is finding the right balance between the project requirements and what one could call creature comforts that make developing for the target system easier.

For RTOSes that also add a hardware abstraction layer (e.g. ChibiOS, QNX, RT Linux, etc.), the HAL part makes porting between different target systems easier, which can also be considered an argument in its favor. In the end, however, whether to go single loop, simple RTOS, complicated RTOS or ‘just an OS’ is a decision that’s ultimately dependent on the context of the project.




Kyle MacLachlan is on TikTok and he lip-synced his famous Agent Cooper scene

@kyle_maclachlan

Diane, it's 11:30 am, February 24th. Entering the town of Twin Peaks… and TikTok. Tag me in your duets today and I'll share some of my favorites 🌲☕️🚗

♬ original sound – Kyle MacLachlan

Kyle MacLachlan, who played Agent Cooper on Twin Peaks, hopped on his exercise bike to lip-sync the iconic scene in which he drives into the town of Twin Peaks while dictating to his assistant, Diane. — Read the rest

damn fine cup of coffee...

Watch Hamilton: An Animal Crossing Musical


I recently started playing Animal Crossing again after a Fortnite-focused break. While searching YouTube for tips on how to deal with capitalist slumlord Tom Nook, I came across this jaw-dropping gem of a video and did not move for an hour and thirteen minutes. — Read the rest

Hamilton is at an Animal Crossing!

Mars landing engineer is Jaime Escalante's former student

Man wearing glasses at chalkboard

Decades before leading a team of more than 100 engineers responsible for the Perseverance rover that landed on Mars this week, East L.A. native Sergio Valdez was a student of none other than Jaime Escalante at Garfield High. In case you're too young to remember or so old you forgot, Escalante was such a bad-ass math teacher that they made a movie about him in the '80s called Stand and Deliver. — Read the rest
