Pinned post


I don't know if the oldbytes instance is big on introductions, but if so, here goes!

I am a Seattle area nerd interested in the small internet. Back in the day, I spent a lot of time as a forum moderator, and I feel like social media has become much bigger, louder, and less interesting since then. Perhaps Mastodon does better?

I'm probably not going to post a lot, but if I do, it'll be about my adventures with dawless music production and doing strange things with old technology. During lockdown I collected lots of new gear, and I'm still working out how best to make tracks with it. Tracker formats (mod, xm) are my home turf.

If you like to do musical things, and/or want someone to listen to your tracks, please by all means @ me.

Huh, it seems this language was a cousin of Grace Hopper's FLOW-MATIC.

Both were developed by the same company, Univac, at about the same time. FLOW-MATIC went on to have considerably more influence, and is well known for shaping the design of COBOL. UNICODE, meanwhile, sank without a trace, to the point where recycling the name later didn't raise any eyebrows!


The word meant something different in 1957.

It seems that UNICODE was a high-level language roughly contemporaneous with FORTRAN. The syntax is a bit more like the later COBOL.

TIL Grace Hopper predicted this in 1955:

"There is little doubt that the development of automatic coding will influence the design of computers.... Instructions added only for the convenience of the programmer will be omitted since the computer, rather than the programmer, will write the detailed coding."

psf boosted

The "Gold G" Gateway 2000 486 I picked up at Boatfest. I love that the double-speed CD-ROM has an indicator light to let you know whether it's running at single or double speed.

psf boosted


"The 16-bit segments in protected mode are a problem for general purpose computing."

I can't help but disagree with this statement. I frequently hear it repeated, but very few can legitimately justify why it's considered true.

Unless you're working with graphics or audio, or perhaps large vectors of floating point data as you might find in an astronomical program, it is rare for any object to exceed 64KB in size (consider: that's a vector of 8192 doubles in C). Using segment registers as base address pointers to these objects is, in my opinion, simply genius.

The PC/GEOS operating system did exactly this (and in real mode, to boot), and it was a cornerstone of the system's efficiency. It had preemptive multitasking on 8086 hardware running in real mode that rivaled the performance of the then-cooperatively-multitasking Windows on an 80286 in protected mode, and frankly embarrassed GEM entirely.

Its "virtual memory files", used as a standard way to record data files, also relied heavily on this, and allowed for graphical applications to work together in a way that we wouldn't see again until OLE 2.0 finally landed in Windows more than 15 years later. Remember, all this on an 8086.

Segmentation is a hindrance only to those who lack the imagination to apply it. I can't help but laugh audibly every time I see someone complaining about having to do pointer arithmetic to dereference a field in an object because there aren't enough registers in the addressing mode used. (Bringing the (r1+r2+offset) addressing mode to RISC-V seems to be a perennial point of discussion on the mailing lists.) IA32's [r1+r2*n] addressing mode is basically how segmentation works. Add segmentation on top (which IA32 does; by default, the DS register supplies the base for that addressing mode), and you basically get a third register for free.
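The arithmetic is easy to see in a few lines of Python (my own sketch, nothing from a manual): the mode already sums two registers and a scale, and the segment base gets added implicitly on top.

```python
# Effective-address arithmetic for [r1 + r2*n] with an implicit segment base.
# All the names here are my own; the point is just that the segment register
# acts as a third base register that costs nothing in the addressing mode.
def effective_address(seg_base, r1, r2, scale=1, disp=0):
    assert scale in (1, 2, 4, 8)   # the scale factors IA32 allows
    return seg_base + r1 + r2 * scale + disp

# e.g. segment picks the object, r1 an inner offset, r2*scale an array index
addr = effective_address(seg_base=0x20000, r1=0x100, r2=3, scale=4)
```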

Moving on to the 80286 and later features, let's now talk about what "protected" in protected mode means. :)

In addition to its addressing ability, you get fine-grained security protections as well. As long as a segment is less than 1MB in size (which it always will be on the 80286), those boundaries have byte granularity, not page granularity. This makes single-address-space operating systems viable, and arguably preferable, considering all the overhead involved in changing page tables on a process switch. It enables message passing by just swapping pointers: no copying between kernel and user space is required. Just create a descriptor, pass it along, and you're done. If segmentation had survived (and been used wisely), it would have squashed the monolithic-vs-microkernel debate.
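Here's a toy model in Python of what byte-granular descriptors buy you (entirely my own sketch, not real x86 descriptor semantics): every access is bounds- and rights-checked, so "sending a message" is just handing the receiver a descriptor into shared memory.

```python
# Toy single-address-space memory with byte-granular segment protection.
# A "descriptor" is just (base, limit, rights); the class names are mine.
memory = bytearray(1 << 16)

class Descriptor:
    def __init__(self, base, limit, writable=True):
        self.base, self.limit, self.writable = base, limit, writable

    def read(self, offset):
        if offset > self.limit:
            raise MemoryError("segment limit violation")
        return memory[self.base + offset]

    def write(self, offset, value):
        if offset > self.limit or not self.writable:
            raise MemoryError("protection violation")
        memory[self.base + offset] = value

sender = Descriptor(base=0x1000, limit=63)                    # 64-byte buffer
sender.write(0, 42)
receiver = Descriptor(base=0x1000, limit=63, writable=False)  # read-only view
assert receiver.read(0) == 42                                 # same bytes, no copy
```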

And, let's not forget: segments == capabilities. Maybe not complete OCAPs, but enough to be useful. (Fun fact: the idea for capabilities evolved out of studying how segmentation was implemented in the Burroughs mainframes, back in the day. So segments and capability security go hand in hand.)

It is relatively easy for a CPU hardware designer to create "gates" (indeed, Intel already did, although they're a bit sluggish because they do too much) that enable you to invoke services on an object. If you don't want to pass on the permission to do so, don't include that gate.

The loss of segmentation in today's computing environment has, I'd wager, kept us stuck rigidly in the 70s-era OS and hardware design landscape.

There's no doubt in my mind that Intel is as much to blame for this as anyone; their implementation of protected-mode segmentation in the 80286 was reasonable considering when it was built, but they should have fixed its mistakes by the time the 80386 shipped. They didn't, and everyone was so utterly fixated on how C and Unix could be made to work that they focused their attention on a flat addressing environment instead.

Not that there's anything wrong with a flat addressing environment; it has its uses, to be sure. But I really wish we could still use segmentation. It would make a lot of things so much easier.

psf boosted

Not a brand new article, but I think about it a lot.

Then again, I'm one of the weird people who likes to say that machine code is homoiconic.

psf boosted

Just replying to one of my older posts to retract an incorrect statement:

"There is not a large body of material written about ".

I was absolutely incorrect. There is a LOT written down. You just have to dig for it a bit. It's in books and papers, and personal webpages, and places like the archive. And most of it is pretty old.

I think the other point stands -- Forthlikes have strong structural similarities, and a lot of behavior inevitably falls out of a short set of rules.


Today I learned that 1 BASE ! will immediately segfault pforth, and will make gforth throw an exception instead of printing the number. My own implementation gets stuck in an infinite loop.

It would be interesting to compare some more Forths on this basis (heh). Any implementers want to try your luck?
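For anyone curious why it misbehaves, here's the failure mode in a few lines of Python (a sketch of the usual digit-extraction loop, not any particular Forth's code): with BASE = 1, the quotient never shrinks, so an unguarded loop spins forever.

```python
# Emulate the divide-by-BASE digit loop behind Forth's number printing.
# n % 1 is always 0 and n // 1 == n, so base 1 can never terminate.
DIGITS = "0123456789abcdefghijklmnopqrstuvwxyz"

def to_base(n, base):
    if base < 2:
        raise ValueError("base < 2 never terminates: n // 1 == n forever")
    out = []
    while True:
        out.append(DIGITS[n % base])   # emit lowest digit
        n //= base                     # shrink the quotient
        if n == 0:
            break
    return "".join(reversed(out))
```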

oldbytes book review: Inside F83 (Dr. C. H. Ting) 

F83 is certainly A Thing. It is a huge, monumental, and dauntingly complete system, and Ting's 218-page book strains to contain its features: interpreter, compiler, decompiler, assembler, editor, cooperative multi-tasker and more.

This is the first of Ting's books that took more than an evening to read, and it'd take me much longer than that to understand every detail.

For scale, the file size of F83 is 28K, which is much bigger than cmForth, fig-Forth, or eForth... but still smaller than my beloved Turbo Pascal 2.0, which is 36K and now seems positively monstrous by comparison.

F83 is best compared to Turbo Pascal, Turbo Assembler and Turbo Debugger in a single package, at a file size smaller than just Turbo Pascal. That's simply absurd. I am impressed.

I need to seriously recalibrate my expectations of how good a tiny program can be.

psf boosted


I'm an #AppleII hacker best known for cracking thousands of disks:

Passport cracks a lot of disks automatically:

I also preserve still-protected disks:

Anti-M boots a lot of protected disks:

Pitch Dark, an Infocom text adventure pack:

Total Replay, an arcade pack:

Million Perfect Letters, a shifty word game:

oldbytes book review: Systems Guide to figForth (Dr C. H. Ting) 

107 pages about the classic fig-Forth. I am belatedly realizing that I should have read this before his other books, as fig-Forth is a comfortable "stand-alone" system including interpreter, assembler and editor, rather than a bare listener that sits at the end of a serial link.

Despite the broader subject, this is a shorter book than any of the others. It makes good use of the available space.

Going back in time explains many things that I didn't understand about newer Forths. I learned why they bothered with vocabularies: to reuse short word names in interpreter, assembler and editor without collision. There is also a lot in here about using blocks to store source code and bulk data, which I wish I had known before I started playing with MS-DOS native Forths.

The gem of this book is Ting's short description of Forth: "a high level assembly language with an open instruction set for interactive programming and testing".

oldbytes book review: eForth and Zen (Dr. C. H. Ting) 

Another from

A 110-page walkthrough of 80x86 eForth. eForth is a successor to fig-Forth and was released by the Silicon Valley Forth Interest Group in the early 1990s.

Programmers love comparing their favorite system to Zen, don't they? I agree that Forth is small, heterodox, and passed along by oral tradition. Beyond that you'd have to ask a Zen practitioner.

eForth is a practical Forth with an emphasis on portability. It has 31 primitive words written in assembler: the rest is in Forth.

At 7 KB, it's hard to call eForth a maximalist Forth, but the design philosophy is clearly less spartan than cmForth's. eForth is more complicated: structurally, because it is direct-threaded with an inner interpreter; philosophically, because it assumes you'll write more complicated programs and includes tools to help you debug them. The most infectious is CATCH/THROW exceptions, with hooks everywhere, including QUIT.
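Forth's standard exception words are CATCH and THROW, and their semantics fit in a few lines of Python (my own illustration, not eForth source): CATCH saves the data-stack depth before running a word; THROW unwinds to the nearest CATCH, which restores that depth and pushes the throw code.

```python
# Minimal model of Forth CATCH/THROW over an explicit data stack.
class Throw(Exception):
    def __init__(self, code):
        self.code = code

stack = []

def throw(code):
    if code:                      # THROW with 0 is a no-op, as in Forth
        raise Throw(code)

def catch(word):
    depth = len(stack)            # like CATCH saving the stack pointer
    try:
        word()
        stack.append(0)           # 0 on top means "no error"
    except Throw as e:
        del stack[depth:]         # drop anything pushed past the saved depth
        stack.extend([0] * (depth - len(stack)))  # depth restored; contents undefined
        stack.append(e.code)      # push the throw code

def divide():
    b, a = stack.pop(), stack.pop()
    if b == 0:
        throw(-10)                # -10 is the standard "division by zero" code
    stack.append(a // b)

stack.extend([7, 0])
catch(divide)                     # leaves -10 on top instead of crashing
```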

oldbytes book review: Footsteps in an Empty Valley (Dr. C. H. Ting) 

I read this in memory of the author, who recently passed. His works can be downloaded from

This is a 189-page walk through the Novix NC4016 microprocessor and its cmForth system software. It contains cmForth as annotated by Ting, along with Chuck Moore's original code listing.

The most interesting trick of the NC4016 is its TIMES instruction (like x86 REP) which, used together with a shift-add or shift-subtract instruction, yields 16-bit MUL/DIV in 16 cycles.
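The trick itself is easy to sketch in Python (my own illustration, not NC4016 code): one add-and-shift multiply step, repeated 16 times as TIMES would, produces the full 16x16 unsigned product.

```python
# Shift-add multiplication: each step examines one multiplier bit, adds the
# (shifted) multiplicand if it's set, then shifts. 16 steps = 16x16 -> 32 bits.
def mul16(a, b):
    acc = 0
    for _ in range(16):        # TIMES repeats the step instruction
        if b & 1:              # low multiplier bit selects an add
            acc += a
        a <<= 1                # shift multiplicand up...
        b >>= 1                # ...and multiplier down
    return acc
```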

cmForth fits into 30 screens. This includes a peephole optimizer, where a "constant folding"-like trick packs multiple functionalities into one NC4016 instruction where possible. It also includes the metacompiler.

The most interesting feature of cmForth is that it uses a "COMPILER" wordlist instead of an immediate-mode flag. This way, compiling words that emit each machine instruction can be provided for use in the : (colon) compiler.
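A toy model of the scheme in Python (names and structure are my own invention, not cmForth's): while compiling, the outer interpreter searches a separate COMPILER wordlist first, so a word found there runs at compile time even though a word with the same name exists for interpretation.

```python
# Two wordlists instead of an IMMEDIATE flag. Here "dup" has an ordinary
# run-time behavior in FORTH and a compile-time behavior in COMPILER that
# emits a machine instruction directly into the definition being built.
output = []                                    # the "compiled" definition

FORTH = {"dup": lambda: output.append("run dup")}
COMPILER = {"dup": lambda: output.append("emit DUP opcode")}

def process(token, compiling):
    if compiling and token in COMPILER:
        COMPILER[token]()                      # compile-time behavior wins
    elif token in FORTH:
        if compiling:
            output.append(f"compile call to {token}")
        else:
            FORTH[token]()                     # interpret immediately
```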

quote about post-collapse computing 

"We are building a culture which can not survive as trivial an incident as Y2K. Once we lose the billion dollar fabrication plants and once we lose the thousand man programming teams how do we rebuild them? Would we bother to rebuild them? Are computers worth enough to demand the social investment that we put into them? It could be a lot simpler. If it were a lot simpler I would have a lot more confidence that the technology would endure into the indefinite future." -- Chuck Moore

psf boosted

In fact... the more I think about it, the more I think Voyager (with time travel as a running theme, and "barbarians / street gangs" as the initial villain faction... and Kes as a very Trance / Kaylee / Chiana character) was really meant to be set in the far future, post Federation collapse, not the Delta Quadrant!

This would explain *so much* that didn't make sense about Voyager's setup!

* why do they know all alien languages? They're their own!

* why is everything so small-scale? Collapse.


parser finite state machine 

Well that's by FAR the most finicky thing I've had to figure out lately.

(Image description: hand-rolled finite state machine for turning S-expressions into cons pairs -- may still contain bugs but I've worked a bunch of cases out on paper and it seems fine?)

psf boosted

selective breeding has left the domesticated laptop computer frail and anemic for the sake of appearance. stop buying unhealthy laptop breeds

16 bit 

I am increasingly coming to appreciate 16-bit CPUs.

8 bits is fine for arithmetic, but an 8-bit memory address isn't big enough for anything useful (so you get paging and/or annoying multi-word loads/stores everywhere), and an 8-bit instruction also isn't big enough to wrap up constants (so you get a variable-length instruction encoding with immediate values).

32+ bits is great for a nice big flat address space, tagged pointers, etc., but it's a comical waste of space for instructions (so once again, you get a variable-length instruction encoding).

A 12-to-18-bit word size plus a fixed-length instruction encoding is the sweet spot for simplicity; just ask the PDP-8 or the J1 :)
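To make that concrete, here's a sketch of a J1-style decoder in Python (field layout based on my recollection of the J1 paper; treat the details as illustrative rather than authoritative): the top bit flags a 15-bit literal, and three-bit opcodes still leave 13 bits of target for jumps and calls, all in one fixed-size word.

```python
# Decode a 16-bit J1-style instruction word into (kind, payload).
# 1nnnnnnnnnnnnnnn -> push 15-bit literal; otherwise the top three bits
# select jump / conditional branch / call / ALU, with a 13-bit field.
def decode(word):
    if word & 0x8000:
        return ("lit", word & 0x7FFF)          # push a 15-bit literal
    kinds = ("jump", "0branch", "call", "alu")
    return (kinds[(word >> 13) & 0x3], word & 0x1FFF)
```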
