so this article strikes a chord...

but it feels like the author has got the diagnosis right, but landed on a prescription that's 180° wrong:

The entry barrier to programming needs to be high!

no, absolutely not! and if the author looks hard, this is obvious: the entry barrier to programming was higher back then only in terms of obtaining access. in every other sense, it was lower; even the largest computers were so much simpler that one person could grok them almost overnight, operating systems were laughably simple compared with today's, textual interfaces are almost trivially easy to code compared with modern GUIs, network security was just not something anyone cared about...

in contrast, there's a revealing quote from a reply on Hacker News:

At a minimum you need node, npm, webpack, babel, an spa framework, a frontend router, a css transpiler, a css framework, a test runner, a testing functions library, and a bunch of smaller things, and that's just what is "needed" to build a static website with a bit of interaction.

that's a lot of complexity to get on top of! it means that the entry barrier to programming is actually much higher than it was - to the point where people just don't have room in their heads for the high level stuff of web programming and a model of what the computers are actually doing with that stuff.

and yes, a static website with a couple of CGI pages is simpler - as long as you're OK with making every security mistake of the last three decades all over again. because network security is hard, and that's part of what drives the adoption of frameworks.

the other part is kind of historical now - it's the mess that browsers were in between about 1995 and 2010, where nothing worked properly from one browser to another and the most widely used browser was hideously broken. a lot of frameworks evolved to hide the complexity of working out what CSS to present to which browser to get a uniform result.

in a way this complexity still exists - it's just, now it's the gulf between finger-steered mobile and mouse-driven desktop browsers.

(somewhere along the way that changed into forcing browsers to present things as though they were spreads in a Saturday magazine, which doesn't exactly help matters - but blame management and their insistence on hiring graphics designers for that. ;-)

the article continues...

Programming is engineering; it's not something where you throw stuff at the wall and see what sticks

trouble is, it's not just engineering. there's a lot of art involved too. that's where things get problematic...

if it were just engineering, then simple solutions would obviously be the best. you don't want an innovative bridge that prioritises form first! and that's the thing: the barrier to designing a bridge is not high. people have been doing it for millennia! the barrier to designing a bridge that won't collapse, ever, is high - but only in the sense of "you have to demonstrate an understanding of why innovative bridges are a bad idea before we let you".

but of course, everyone knows how to use a bridge. it doesn't require thinking about. (well, the "three ropes across a ravine" style of bridge does, i guess - and that's where 1970s/early-80s computing was, frankly.)

but coding isn't about that. coding is the head-on collision of engineering, architecture and anything-goes graphic design - and the designers are leading things, which is probably the natural consequence of what the web has evolved into. and that, i suspect, is why things have become so complex.

hmm... i think this might have ended up in a different place from where i started.

the point i was initially intending to make is that the barrier to entry into computing has become unusably high and needs to be reduced again, by fundamentally rethinking how we can achieve the desirable aims without the colossal amounts of cruft that we've seen evolve around delivering them.

but i think where i've ended up is that the barrier to entry has actually become unscalable for people of an engineering mindset, because it's not for us any more. the web people couldn't / wouldn't wait for us to work out how to build proper, thousand-year viaducts - so they built what they needed on a foundation of three-ropes-across-a-ravine, and have been adding stuff to shore that up ever since. and they can't back out now, because they've been working on this stuff for three decades and there's too much of it now for them to do anything but keep piling on.

...buuuut what do i know? i burned out hard nearly two decades ago, and looking at what i'd have to learn in order to get back into the game, it just looks utterly insurmountable. not to mention that i'd need to build stuff to practice, and i don't have the first idea of what to build! so basically i'm stuck at a "give me a computer i can know inside out" level, because anything else just makes me dizzy.

hence my attraction to things like Forth and Lisp. they look tractable. i could rig up a simple Forth in a few KB of code, and extend that gradually up into a complete system... and it looks less daunting to do that than it does to learn how to make a modern Web app.
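to give a flavour of how little a Forth-style core actually needs, here's a toy sketch in Go - just a data stack and a handful of built-in words. everything here is illustrative (my own invented word set, not any particular Forth); a real Forth adds a return stack, user-defined words and compilation on top of this skeleton:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// eval runs a whitespace-separated sequence of Forth-style words
// against a data stack and returns the final stack contents.
func eval(input string) ([]int, error) {
	var stack []int
	pop := func() (int, error) {
		if len(stack) == 0 {
			return 0, fmt.Errorf("stack underflow")
		}
		v := stack[len(stack)-1]
		stack = stack[:len(stack)-1]
		return v, nil
	}
	for _, tok := range strings.Fields(input) {
		switch tok {
		case "+", "-", "*":
			// binary operators pop two values, push one result
			b, err := pop()
			if err != nil {
				return nil, err
			}
			a, err := pop()
			if err != nil {
				return nil, err
			}
			switch tok {
			case "+":
				stack = append(stack, a+b)
			case "-":
				stack = append(stack, a-b)
			case "*":
				stack = append(stack, a*b)
			}
		case "dup":
			// duplicate the top of stack
			a, err := pop()
			if err != nil {
				return nil, err
			}
			stack = append(stack, a, a)
		default:
			// anything else must be a number literal
			n, err := strconv.Atoi(tok)
			if err != nil {
				return nil, fmt.Errorf("unknown word: %s", tok)
			}
			stack = append(stack, n)
		}
	}
	return stack, nil
}

func main() {
	result, _ := eval("2 3 + dup *") // (2+3) squared
	fmt.Println(result)              // [25]
}
```

that whole evaluator is a screenful; growing it word by word into a system feels tractable in a way that a webpack config never will.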

i suspect the same impulse is driving things like uxn and retro and ForthBox. we have lost tractability; the only people who can get to grips with modern Web programming are folk who don't need tractability - who are able to just use the bits they need and not worry about what's under the hood.

@millihertz I think all software development is being done by very lean teams on short-term contracts, using flawed frameworks they barely understand and which only work 80% of the time; the other 20% is "undefined". I'm seeing this with apps used for work, the supermarket scan-as-you-shop devices, EPOS terminals in the village shop, the Visa payment network, and whatever systems they use at the petrol station. Everything is flakier than it was in the 1980s...

@millihertz it may also be that amongst all the edgelords, incel/gamer types and disaffected cryptobros, there are as many or more people trying to actively break stuff than to fix it; hence the constant need for increased security everywhere (adding complexity and points of failure..)

@vfrmedia @millihertz The difficulty of supporting security (of this nature) is one half of the reason why I refuse to even come close to implementing a secure environment on the ForthBox (or its predecessor, the Kestrel family of computers).

But the second half is exactly this point here: if I were to implement my own secure-watzit feature, all that does is paint a bullseye on the project.

For me, security through transparency is the preferred solution to this problem. It's true that there are no "security" features in the package deal; but there are also no nooks or crannies in which malware can hide. No management engines, no hooks in the operating system that cannot be audited by a normal (motivated) user, etc. If a user can understand the concept of AUTOEXEC.BAT and CONFIG.SYS in MS-DOS, then there's truly no place to hide except the kernel's binary image. (The BIOS is off-limits, because in my designs, you need physical access to the machine in order to re-flash it.)

@millihertz There are a lot of parallel “back to basics” movements. The Forth thing you mentioned. Arduino and ESP* and Raspberry Pi and discrete components. Analogue synths. Classic cars and motorbike tinkering. Some succeed and then start to build complexity themselves, becoming the thing that they railed against. Some remain forever niche. This has always been happening.

@futzle Gemini is another good example. There certainly is demand for that, but alas it's not mainstream 🤷

@millihertz Personally I have managed to keep track of the field despite not actually doing any work in it, and let me tell you.

All people do is ask at work or at a forum what’s the popular thing (usually React + Gatsby/Next.js on top of an Express.js backend), pick up a tutorial, and just… copy-paste your way out. Literally 0 understanding of what’s going on, pure trial and error. Somehow it “works”.

It’s not that complex at all tho, it’s just all the layering involved, derived from further specialization leading to simpler dev/maintenance of each part for each individual involved.

@millihertz Also, my opinion is that systems should be designed to avoid Turing completeness and be generalized as much as possible. Make it so that mistakes can’t happen by design, basically. Then skill won’t matter at all (fun fact: all humans are error-prone, constantly!)

@millihertz I really like building web applications with Go, Postgres, and pureCSS.

Go is minimal, and has a templating language in the stdlib which makes it easy to build SSR webpages. It can run standalone with the built-in webserver, or via fastcgi or cgi; I haven't tried cgi.

There are a few extra Go libs I like. Echo or Gorilla toolkit, and PGX.

Postgres supports JSONB, so it can be a document db too.

PureCSS provides a minimal base of CSS. After that it's a matter of adding your own style sheet.

@millihertz I should add, I’m not a professional web dev. This is how I get by when I have to do it.

@jollyrogue i'm not sure i'd agree with the idea of Go as minimal. installing it uses up ¾GB on Void-aarch64-musl, and that's without additional libraries. that's at least two orders of magnitude outside what i'm accustomed to considering "minimal" ;-)

@millihertz There are a lot of tools in the install package. It's kind of like Python in that the batteries are included.

A couple of other tools that might be useful later…

Delve for debugging.

Air for live reloading.

I haven’t figured out how to get these two to work together.

@millihertz The lack of tractability is what led me to give up on attempting to do web development. I've looked at modern web stuff, and it's just too much for me to deal with now. I do still develop and maintain a single application with a web-based front end for my employer. It's decidedly not modern though. No javascript, minimal css, static html, some simple forms, and a cgi backend written in retro.

@crc if only more things worked that way! time was, that was at least a required fallback... but tbh i should've known web development was doomed when i made the mistake of criticising something for not having a non-JS fallback on reddit, way back in 2006 or so, and was loudly and obnoxiously shouted down by all the web devs on the thread... that's the mindset that's been in charge since then

> we have lost tractability
Exactly! Most of the technologies behind the modern web, the file formats, were supposed to be human-readable, but look at the source of any modern "web app", especially the javascript part: it's as readable as a disassembled binary. No one needs that readability anymore, but it's still there, just consuming resources.

@millihertz Pulling up the ladder has been a conscious decision in the tech industry.

Yes, there are problems only complex tools can solve, and the complex tools have been favored in the industry.

Simple, user-friendly tools have disappeared from the landscape. People have focused on the tech elite, and they kind of forgot that computers are tools, and that there need to be accessible levels.

Sometimes a screwdriver is the correct tool; sometimes it’s the electric drill with 100 settings.

@millihertz Masochism, billable hours, protection of the tech elite, and an inability to correctly scope a project are what I normally blame.

@millihertz this article perplexes me. I agree with all the bad things being bad, but it concerns me to use that as grounds for going back to "simpler" tools in simpler times.

The new ways are bad, but the old ways are bad too, and it saddens me if we can't dream bigger than that. Programming should be much easier than it is, or ever was, because the engineering task is fitting a tool to a human need, not writing arcane runes on a piece of paper and hoping it conjures a finished product.

@rune @millihertz I don't think anyone is arguing against tools.

What's being argued, I think, is for a toolbox with lots of small, horizontal trays in which you can readily see what is available and what you need/can use, and not a tool well which is hundreds of feet down with a rocket sitting inside pre-built, and you having to say, "I need a pump, and maybe I can salvage this pump out of that rocket, I hope."

The analogy is perhaps a bit of a stretch, but it's not invalid. Today's frameworks literally are pre-finished solutions into which you just plug differences, and for these tools to work well for your needs, you need not only an intimate knowledge of your problem domain (already a huge ask), but also of the problem domain that framework was originally intended to solve (a nearly impossible ask). Your code exists mainly to rectify the impedance mismatch between those two domains.

Now multiply that times the number of layers of abstraction involved, and you get a kind of intuitive measure of what "bad" here means.

@millihertz I think "simpler" and "barrier to entry" are doing a bit too much work here. There are different ways a computer can be simpler. I've written assembly and in a sense it's very simple: move that register to that address, add that register to that other register, move the instruction pointer. But I'd say the barrier to entry is much higher in assembly than in python or js, even though they're much larger, more complex languages.

@modulux i think the barrier to entry for assembly language is really low! the problem with assembly language is that the reception room is really long and unusually steep ;-)

OldBytes Space - Mastodon
