Broadcast Engineer at BellMedia, computer history buff, compulsive hoarder of deprecated and disparate hardware, R/C, robots, Arduino, RF, and everything in between.

'Super 8: An Illustrated History' Will Scratch That Analog Itch


Rookie filmmakers and analog die-hards alike will find something to love in Danny Plotnick’s new coffee table book ‘Super 8: An Illustrated History.’ Newcomers will whisper a quiet "thank you" before tucking in their iPhones tonight after being introduced to the laborious process their filmmaking ancestors went through, from buying expensive film stock to processing it by hand. Experience the dizzying highs and treacherous lows as the author recounts his own decades-long love affair with Super 8 filmmaking (see: Skate Witches). The glorious photos of vintage cameras and projectors that adorn this book will have even the most casual gearhead drooling, and interviews with underground filmmakers who cut their teeth on Super 8, including Richard Linklater, Bruce LaBruce, and GB Jones, offer insight into the passion that drove no-budget artists in the pre-digital age.

Super 8: An Illustrated History.

 

Eumig Mark 610 D - Super 8 projector

Braun SB-1 Viewer

 

Read the whole story
tekvax
17 hours ago
reply
Burlington, Ontario
Share this story
Delete

Linux Fu: Alternative Shells


On Unix — the progenitor of Linux — there was /bin/sh. It was simple, by comparison to today’s shells, but it allowed you to enter commands and — most importantly — execute lists of commands. In fact, it was a simple programming language that could make decisions, loop, and do other things to allow you to write scripts that were more than just a list of programs to run. However, it wasn’t always the easiest thing to use, so in true Unix fashion, people started writing new shells. In this post, I want to point out a few shells other than the ubiquitous bash, which is one of the successors to the old sh program.

Since the 7th Edition of Unix, sh was actually the Bourne shell, named after its author, Stephen Bourne. It replaced the older Thompson shell written in 1971. That shell had some resemblance to a modern shell, but wasn’t really set up for scripting. It did have the standard syntax for redirection and piping, though. The PWB shell was also an early contender to replace Thompson, but all of those shells have pretty much disappeared.

You probably use bash and, honestly, you’ll probably continue to use bash after reading this post. But there are a few alternatives and for some people, they are worth considering. Also, there are a few special-purpose shells you may very well encounter even if your primary shell is bash.

Two Philosophies

There are really two ways to go when creating a new shell. Unix and Linux custom, as well as several standards, assume you will have /bin/sh available to execute scripts. Of course, a shell script can also ask for a specific interpreter using the #! syntax. That’s how you can have scripts written in things like awk.
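As a quick illustration of that #! mechanism, here is a minimal sketch (the greeting text and file name are made up for the example):

```shell
#!/bin/sh
# The first line tells the kernel which interpreter this file should be
# handed to. Marking a script #!/bin/sh requests classic Bourne-compatible
# behavior no matter which shell the user logged in with.
greeting="hello from sh"
echo "$greeting"
```

Saved as, say, hello.sh and made executable with chmod +x, running ./hello.sh invokes /bin/sh even if your interactive shell is something exotic.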

That leads to two different approaches. You can create a new shell that is compatible with sh but extended; that’s the approach taken by the Korn shell (ksh) and the Bourne Again shell (bash). Or you can replace the shell entirely with something new, like the C shell (in practice now tcsh, which has largely supplanted the original csh). These shells don’t look anything like the classic shell. Of course, neither does bash if you examine the details, but superficially most things you can do with sh also work with bash; bash just adds a lot more.

Korn Shell

David Korn at AT&T wrote a shell that bears his name. If you only know bash, you’d be a lot more comfortable with ksh than with sh. It is a compatible shell, but offers things we take for granted today. For example, ksh provided command line editing, coroutines, and new control structures like select. It also borrowed ideas from the C shell such as history, functions, and aliases.

The only problem with ksh is that AT&T held it close to its chest for years. So even though not many people use ksh today, its ideas spread to other shells and remain in wide use. There is a public domain version, pdksh, if you want to try it out.

Ash and Dash

The Almquist shell, or ash, is basically a clone of the Bourne shell written by Kenneth Almquist. It doesn’t add a lot of features, but it is very small and fast, which makes it a popular choice for tiny Linux distributions like rescue disks and embedded systems. In 1997 Herbert Xu ported ash for use with Debian, and it became dash, the Debian Almquist shell. If you use any of the Debian-derived distributions, you’ll probably find that /bin/sh is a link to dash.
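You can check which shell hides behind /bin/sh on your own machine. This sketch just resolves the symlink chain; the printed path varies by distribution (dash on Debian and Ubuntu, bash on many others), so treat it as informational:

```shell
# Resolve /bin/sh through any chain of symlinks to see which shell
# actually runs your #!/bin/sh scripts on this system.
target=$(readlink -f /bin/sh)
echo "/bin/sh resolves to: $target"
```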

Fish

Fish isn’t named after anyone — not even a TV detective. It stands for Friendly Interactive Shell. Unlike ksh, ash, dash, and bash, fish doesn’t try to be compatible with the old classic shell programs. Instead, it tries to be very user friendly. For example, it automatically suggests things as you type.

A big feature of fish is that it doesn’t implicitly create subshells. Consider this (contrived) example:

SUCCESS=0; cat /etc/passwd | if grep ^kirk: ; then SUCCESS=1; fi

Swap “kirk” for a user in your passwd file and try this under bash. Then print $SUCCESS and you will see it stays zero no matter what. The reason is that the part of the command to the right of the pipe character runs in a subshell. You set the variable in that subshell, which then exits, and the shell you started in still has SUCCESS set to zero. With fish, this doesn’t happen.
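Here is a minimal demonstration under bash or dash, using “root” since that account exists in virtually every passwd file, followed by one common workaround:

```shell
# The assignment on the right of the pipe happens in a subshell,
# so the parent shell's SUCCESS is untouched.
SUCCESS=0
cat /etc/passwd | if grep -q '^root:' ; then SUCCESS=1 ; fi
echo "after pipeline: SUCCESS=$SUCCESS"    # still 0

# Portable workaround: drop the pipe and let grep read the file
# itself, keeping the assignment in the current shell.
SUCCESS=0
if grep -q '^root:' /etc/passwd ; then SUCCESS=1 ; fi
echo "after workaround: SUCCESS=$SUCCESS"  # now 1
```

Bash users can also look at the lastpipe shell option, which runs the final stage of a pipeline in the current shell; fish simply sidesteps the whole issue.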

If you were setting up Linux for a new user, fish might be a good choice for their default shell. Most power users, though, will want to stick to something more conventional. If you do want to learn more, check out the video below.

Z Shell

The Z shell is newer, dating from 1990. This may be the most popular shell outside of bash on this list. One of the biggest reasons is that it has a plugin architecture that allows for lots of customization including themes and very sophisticated command line completion. You can edit multiline commands easily. Some plugins even provide things like an FTP client.

Many of the things you get out of the box with zsh can be added to bash, but it would be a lot of work. If you start zsh as sh, it pretends to be sh — a lot of advanced shells do that.

Because of the plugin architecture, there’s something like an app store for zsh called “Oh My ZSH.” If you browse through it, you’ll probably be tempted to try zsh. If you ask a seasoned Linux user — at least in the year 2020 — what shell they use, and they don’t answer bash, they’ll probably answer zsh. If you have an hour and a half to kill, you might enjoy the video below.

And There’s More

There are probably more shells, but ultimately it is a matter of personal preference. One we are watching is Nu shell. It has some interesting ideas about extending the idea of a pipe and stream in Linux. I haven’t tried it yet, but as it becomes more stable, I might. If you are an emacs fan, there is eshell — something I’ll talk about in a future post.

Wikipedia has a good comparison matrix of shells if you are curious. Personally? I use bash, but I am always tempted to learn zsh better. I’ve used all of these at some point except fish. How about you? Leave a comment with your favorite shell, especially if it isn’t on this list.

 


Stealing RAM For A Microcontroller From A TFT Display


PC users with long memories will recall the days when the one-megabyte barrier was a significant problem, and the various tricks of extended and expanded memory that were used to mitigate it. One of them was to install a driver that mapped surplus graphics card memory as system memory when the display was in DOS text mode, and it was this that was brought to mind when we read about [Frank D]’s microcontroller implementation of Conway’s Game Of Life.

The components were those he had to hand: an STM32F030F4P6 and an RM68130 176 × 220 TFT board. The STM is not the most powerful of chips, with only 16 kB of flash and 4 kB of RAM. The display has enough on-board memory to support 18 bits of colour information, but when it is running in eight-colour mode it only uses three of them. The 15 bits that remain are thus available for other purposes, and though the arcane format in which they are read required some understanding, they could be used to provide a very useful extra 38720 bytes of RAM for the microcontroller, just as once happened with those DOS PC graphics cards of old. Interestingly, the same technique should work with other similar displays.

Though this isn’t a new technique by any means, we can’t recall seeing it used in a microcontroller project such as this one before. We’ve brought you many Games of Life, though, as well as marking John Conway’s passing earlier this year.


This Week in Security: DNS DDOS, Revenge of the 15 Year Old Bug, and More


Another DDoS amplification technique has recently been disclosed: NXNSAttack (technical paper here), which can be used against DNS servers.

We’ve covered amplification attacks before. The short explanation is that some UDP services, like DNS, can be abused to get more mileage out of a DDoS attack. The attacking machines send messages like this: “Hello Google DNS, this is the Hackaday server. Can you send me a really big DNS response packet?” If the DNS response is bigger than the request, then the overall attack is bigger as a result. The measure of effectiveness is the amplification factor: for every byte of DDoS traffic sent by the attacking machines, how many bytes are actually delivered to the victim machine? Mirai, for example, had an amplification factor of around 2.6.
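In those terms the factor is simply response size over request size. A toy calculation, with packet sizes invented purely for illustration:

```shell
# Amplification factor: bytes that reach the victim per byte the
# attacker sends. The sizes below are made up for the example.
request_bytes=64
response_bytes=3328
amplification=$(( response_bytes / request_bytes ))
echo "amplification factor: ${amplification}x"   # 52x
```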

NXNSAttack has a theoretical per-byte amplification factor of 163. That’s not a missed decimal point; this has the potential to be quite the nasty problem.

To pull off the attack, the baddie needs to control a domain name server that’s authoritative for its own domain: evil.com. An innocent DNS server is then asked for the IP address of a random machine in the evil subdomain. Since the innocent server has never seen the name before, it asks the root .com server for the IP address of the evil DNS server (ns1.evil.com) and then goes and asks there.

Normally, the evil DNS server would respond with the IP address of the machine in its own domain, and the story would end happily. But here the evil nameserver responds with the addresses of many “nameservers” in the target domain, all invented simply to generate traffic, and tells the innocent DNS server to go ask them: nslgb7vX.sucker.com and nseHOiF.sucker.com and so on. Here comes the amplification.

Many DNS resolvers will look up the IP address for each and every “nameserver” they receive, and will do so in parallel, because under normal circumstances these IPs are cached and the resolver can sweep up an entire domain’s set of DNS servers in one go and never have to ask again. So the innocent DNS server asks the root .com server for the IP address of the target’s authoritative server, ns1.sucker.com, where it’s going to look up all of the IP addresses for the fake nameservers.

But since all of the “nameserver” names are random and fake, the innocent resolver is fooled into hammering ns1.sucker.com with requests for the IP address of each of these fake nameservers. In practice, this multiplies the DNS requests a few-fold: 10-20x is plausible. The full attack uses two stages of redirections from the evil nameserver to essentially square the number of requests, which is how they end up with a factor of 163 in practice. In this scenario, the traffic from just a few malicious machines can quickly overwhelm the victim’s infrastructure.

NXNSAttack was privately disclosed to a handful of DNS vendors, so limited mitigations are already available. Running a recursive DNS server was already a difficult task, but now there is one more pitfall to watch out for.

15-year-old Vulnerability Finally Exploited

Some vulnerabilities are obviously exploitable, and get fixed ASAP. In other cases, code may technically be vulnerable, but in a way that seems extremely unlikely to ever be practically exploitable. It’s easy to dismiss these as non-issues, and never do the work to fix them. Qmail contained a trio of flaws for at least 15 years, and serves as a good example of why it’s important to fix “unexploitable” issues.

2005 was the era when x86-64 machines were first becoming available to the general public. It shouldn’t be a great surprise that certain programming assumptions that were safe on a 32-bit platform were no longer valid on a 64-bit machine. Qmail was written with the assumption that an array would never be allocated more than 4 GB of memory, which was safe in the 32-bit era. CVE-2005-1513, -1514, and -1515 were reported and dismissed, as reaching the 4 GB limit was considered impossible in any default, or sane, deployment.
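The core failure mode is a length that wraps around when it no longer fits in 32 bits. A toy sketch of the arithmetic (this illustrates integer wraparound generally, not qmail's actual code):

```shell
# A value just past 4 GB, masked to 32 bits, wraps to a tiny number,
# which is how "impossible" sizes defeat 32-bit bounds assumptions.
four_gb=$(( 4 * 1024 * 1024 * 1024 ))
offset=100
wrapped=$(( (four_gb + offset) & 0xFFFFFFFF ))
echo "a 32-bit length field would see: $wrapped"   # 100
```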

Fast-forward to May 19th, 2020 and a way to exploit these bugs was finally found. The vulnerable code is also used in the qmail-local service, which by default isn’t limited to a set memory amount. A specially crafted 4 GB email can trigger the integer overflow, and lead to remote code execution. There are plenty of juicy details in the full write-up, so check it out for more.

300,000 Vulnerable QNAP Devices

QNAP makes NAS devices that are rather popular with prosumer users. Going above and beyond simple file storage, these QNAP devices have features like an integrated photo organizer, music player, and more. [Henry Huang] discovered three separate vulnerabilities that can be chained together to gain a root webshell. So first off, any QNAP users out there: go check for updates!

Now that you’re up to date, let’s dig through the exploit chain. First, a remote API designed for interacting with sample albums is accessible without authentication. An attacker can create a sample album and is returned an album ID. The information from the created album is used to craft a request that can read any file on the file system, through unsanitized file names containing “../../”-style sequences. This is used to read an application login token.
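A generic illustration of why unsanitized “../” sequences are dangerous, using throwaway paths invented for the example rather than anything QNAP-specific:

```shell
# A naive service concatenates a base directory with user input.
base=/tmp/albums_demo
mkdir -p "$base"
echo "secret" > /tmp/secret_demo.txt

# The "../" in the user-supplied name walks out of the base directory,
# so the service serves a file it never intended to expose.
user_input="../secret_demo.txt"
cat "$base/$user_input"   # prints "secret"
```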

That token is then used to log in, and another pair of vulnerabilities allows an attacker to drop PHP code in the web folder. All that’s left is to access the new page in a browser, and the injected PHP code is run. As the webserver on these devices runs as root, injecting a remote shell means full device compromise.

The Million Dollar Challenge?

The Houseparty social network, run by Epic Games, put out a challenge on Twitter: provide proof of a smear campaign about security problems on Houseparty, and they would pay a cool million-dollar bounty.

That offer caught the attention of [Zach Edwards], who started looking into the security of Houseparty. What he found wasn’t pretty. The login page doesn’t use any Content Security Policy (CSP). Among other things, this means it could be embedded in a phishing page.

[Zach] kept digging and discovered a number of “thehousepartyapp.com” subdomains that had been hijacked. It appears that a sophisticated credit card fraud campaign is using these subdomains. The entire story is complex, and there is probably more still to uncover. Unfortunately, it appears that Epic Games isn’t taking the discovery as seriously as one might hope.

Odds-n-Ends

The TrendMicro Rootkit Remover tool installs the TrendMicro Common Module driver. [Bill Demirkapi], who is only 18, decided to take a look, and discovered a few oddities. Among them, this driver detects when it’s being inspected by a tool like Driver Verifier, and cheats in order to pass the WHQL test. To put a cherry on top of his research, [Bill] describes a rootkit that hijacks the TrendMicro driver.

Supercomputers are apparently the next frontier in malware. Multiple machines have been compromised by what appears to be a rather sophisticated campaign — one that intentionally tries very hard to clean evidence of its activities. It’s unclear what exactly the purpose of the attacks is, but it’s a reasonable conclusion that, as expensive as modern supercomputers are, the data they produce could also be of great value in certain situations. For a special bonus, the article calls out this situation’s resemblance to “The Cuckoo’s Egg” and everyone’s favorite, Clifford Stoll. (I suggest a Klein Bottle drinking game for every mention of Stoll, who seems to be everyone’s favorite guy.)

And finally, while a port scan isn’t a crime, it’s a bit rude for a website to run one from within your browser just because you visited. eBay is the given example, and interestingly, the scan is only run when the site is accessed from a Windows machine. It’s suggested that the port scanning is intended to discover visitors whose machines are compromised.


Microsoft Releases the Source Code You Wanted Almost 30 Years Ago


In the late 1970s and early 1980s, if you had a personal computer there was a fair chance it either booted into some version of Microsoft Basic or could load and run it. There were other versions, of course, especially for very small computers, but the gold standard for home computer Basic was Microsoft’s version, known then as GW-Basic. Now you can get the once-coveted Microsoft Basic source code for the 8086/8088 directly from Microsoft, in the state you would have found it in 1983. They put up a read-only GW-BASIC repository, presumably to stop a flood of feature requests for GPU acceleration.

You might wonder why they would do this. It is certainly educational, especially if you are interested in assembly language. And for historical reasons, you might want a copy you could modify for your latest retrocomputer project.

There are a few tidbits of interest. Some of the source is marked that it was translated. Apparently, Microsoft had a master implementation for some processor — real or imagined — and could translate from that code to 8088, Z-80, 6502, or any other processor they wanted to target.

From what we understand, GW-Basic was identical to IBM’s BASICA, but didn’t require certain IBM PC ROMs to operate. Of course, BASICA, itself, came from MBASIC, Microsoft’s CP/M language that originated with Altair Basic. A long lineage that influenced personal computers for many years. On a side note, there’s debate on what the GW stands for. Gee-Whiz is a popular vote, but it could stand for ‘Gates, William’, Greg Whitten (an early Microsoft employee), or Gates-Whitten. The source code doesn’t appear to answer that question.

We did enjoy the 1975 copyright message, though:

ORIGINALLY WRITTEN ON THE PDP-10 FROM
FEBRUARY 9 TO APRIL 9 1975

BILL GATES WROTE A LOT OF STUFF.
PAUL ALLEN WROTE A LOT OF OTHER STUFF AND FAST CODE.
MONTE DAVIDOFF WROTE THE MATH PACKAGE (F4I.MAC).

It wasn’t long ago that Microsoft released some old versions of MSDOS. If you have the urge to write some Basic, you might pass on GW-Basic and try QB64, instead.

GW-Basic Disk and Manual photo by [Palatinatian] CC-SA-4.0.


A 4-bit Random Number Generator


Randomness is a pursuit in a similar vein to metrology or time and frequency, in that inordinate quantities of effort can be expended in pursuit of its purest form. The Holy Grail is a source of completely unpredictable randomness, and the search for entropy so pure has taken experimenters into the sampling of lava lamps, noise sources, unpredictable timings of user actions in computer systems, and even into sampling radioactive decay. It’s a field that need not be expensive or difficult to work in, as [Henk Mulder] shows us with his 4-bit analogue random number generator.

One of the simplest circuits for generating random analogue noise involves a reverse-biased diode in either Zener or avalanche breakdown, and it is a variation on this that he’s using. A reverse-biased emitter junction of a transistor produces noise, which is amplified by another transistor and then converted to a digital on-off stream of ones and zeroes by a third. Instead of a shift register to create his four bits, he’s using four identical circuits; with no clock, their outputs change state at will.

A large part of his post is an examination of randomness and what makes a good random source. He finds this source to be flawed because it has a bias towards logic one in its output, but we wonder whether the culprit might be the two-transistor circuit and its biasing rather than the noise itself. It also produces a sampling rate of about 100 kbps, which is a little slow when sampling with the Teensy he’s using.

An understanding of random number generation is both a fascinating and important skill to have. We’ve featured so many RNGs over the years, here’s one powered by memes, and another by a fish tank.
