May 24, 2016

Alex Bloor

When Junk Callers get feisty… (and don’t know the law)

(Update at end 16:32 and 23:08) I had a message passed to me, to call someone back; a Michael Kennedy from “PSI Group”. The name of the company they said they were from seemed familiar, so I did call them … Continue reading

by Alex Bloor at May 24, 2016 01:53 PM

May 20, 2016

Alex Bloor

Part 5: Postmortem of a Kickstarter campaign; Camsformer

Part Five – The best stuff always happens after I leave the party 🙁 Part Four ended with me being given a refund and effectively ending my relationship with the project. Despite no longer having a chance of getting the … Continue reading

by Alex Bloor at May 20, 2016 05:12 PM

May 19, 2016

Alex Bloor

Part 4: Postmortem of a Kickstarter campaign; Camsformer

Part Four…The purchase of my silence (spoiler alert) We left the story at part three on September 6th 2015, at which point I’d lost my shit for the first time, pointed out a whole lot of oddities and concerns and … Continue reading

by Alex Bloor at May 19, 2016 08:00 PM

May 18, 2016

Jonathan McDowell

First steps with the ATtiny45

1 port USB Relay

These days the phrase “embedded” usually means no console (except, if you’re lucky, console on a UART for debugging) and probably busybox for as much of userspace as you can get away with. You possibly have package management from OpenEmbedded or similar, though it might just be a horrible kludged together rootfs if someone hates you. Either way it’s rare for it not to involve some sort of hardware and OS much more advanced than the 8 bit machines I started out programming on.

That is, unless you’re playing with Arduinos or other similar hardware. I’m currently waiting on some ESP8266 dev boards to arrive, but even they’re quite advanced, with wifi and a basic OS framework provided. A long time ago I meant to get around to playing with PICs but never managed to do so. What I realised recently was that I have a ready-made USB relay board powered by an ATtiny45. The first step was to figure out whether suitable programming pins were available; it turned out they were all conveniently brought out to the edge of the board. Next I got out my trusty Bus Pirate, installed avrdude and lo and behold:

$ avrdude -p attiny45 -c buspirate -P /dev/ttyUSB0
Attempting to initiate BusPirate binary mode...
avrdude: Paged flash write enabled.
avrdude: AVR device initialized and ready to accept instructions

Reading | ################################################## | 100% 0.01s

avrdude: Device signature = 0x1e9206 (probably t45)

avrdude: safemode: Fuses OK (E:FF, H:DD, L:E1)

avrdude done.  Thank you.

Perfect. I then read the existing flash image off the device, disassembled it, worked out it was based on V-USB and then proceeded to work out that the only interesting extra bit was that the relay was hanging off pin 3 on IO port B. Which led to me knocking up what I thought should be a functionally equivalent version of the firmware, available locally or on GitHub. It’s worked with my basic testing so far and has confirmed to me I understand how the board is set up, meaning I can start to think about what else I could do with it…
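As an aside, the three bytes of the device signature avrdude printed (0x1e9206) follow Atmel’s usual encoding: the first byte is the manufacturer ID (0x1E for Atmel), the low nibble of the second byte gives the flash size as a power of two in KB, and the third byte identifies the part. A quick sketch of that decoding (my own illustration, not from the original post):

```python
def decode_avr_signature(sig: int) -> dict:
    """Decode a 3-byte AVR device signature, e.g. 0x1E9206 for an ATtiny45.

    Byte 1 is the manufacturer ID (0x1E = Atmel), the low nibble of
    byte 2 encodes the flash size as 2**n KB, and byte 3 picks the part.
    """
    vendor = (sig >> 16) & 0xFF
    flash_byte = (sig >> 8) & 0xFF
    part = sig & 0xFF
    return {
        "atmel": vendor == 0x1E,
        "flash_kb": 2 ** (flash_byte & 0x0F),
        "part_id": part,
    }

print(decode_avr_signature(0x1E9206))
# {'atmel': True, 'flash_kb': 4, 'part_id': 6}  -- 4KB flash, as expected for a t45
```

The same convention holds for the bigger parts: an ATmega328P reports 0x1E950F, and 2**5 gives its 32KB of flash.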

May 18, 2016 09:25 PM

May 16, 2016

Jess Rowbottom

Love Love Peace Peace: Adventures In Stockholm

Those of you who are regular readers will know of my love of all things Eurovision: the song contest which, despite being about us all getting on and loving each other, is quite the global battleground. I threw Eurovision parties alongside my ex for around ten years, then in more recent times hosted small intimate get-togethers while friends watched the show. Last year I said I wouldn’t host another one, so throwing caution to the wind I booked myself a ticket to Stockholm for the Eurovision Song Contest 2016.

Starting on Friday…

Landing at Arlanda airport I hopped a reassuringly uneventful shuttle train to my hotel at Hammarby Sjöstad, stereotypically furnished like an Ikea showroom. Half an hour of freshening up, and off exploring.

First destination: The Eurovision Village, a small arena with two stages in the centre of Stockholm near the Royal Palace. Sponsors plied their wares with Eurovision gimmicks: you could karaoke on a little stage, dress up as an act, have yourself recorded or photographed and publicised. On the stages previous Eurovision acts sang their songs and had a stab at current entries with everyone joining in.

Absolutely everything was Eurovision with banners all over the place. Round the corner sat a huge countdown clock with a queue of camera-wielding public waiting for their selfies. Printed signs in shop windows saying “Eurovision Special! Handbags, 349 SEK!”. Pedestrian crossings played Loreen’s Euphoria while you waited and Måns Zelmerlow’s Heroes when it was time to walk. Rainbow postboxes. Anything which could be linked to Eurovision was – but bizarrely very little actual merchandise save for a small stand with a huge queue in the Village itself. On one side the ABBA Museum had a booth and given I had nothing to do that day other than explore, I booked myself a ticket and hopped a tram.

Eurovision overload in the Museum

Pop House is a little museum complex housing a couple of exhibitions alongside the ABBA Museum itself, in particular a temporary Eurovision Song Contest Museum. With exhibits such as Conchita Wurst’s Rise Like A Phoenix dress from the Eurovision Final 2014 and ABBA’s award medal from Brighton in 1974 there were certainly lots of things to look at. Through into the ABBA Museum itself it seemed to have it all – from the tour vans which Björn and Benny started their careers with, to a telephone which if it rang was apparently a member of ABBA themselves phoning the museum for a chat, exact reproductions of their studios and management office, vocal booths, and a stage where you could perform as the fifth member of ABBA. Gold discs abounded alongside artefacts from the big tours. Finally, there was an exhibition of Swedish pop history with “stuff” from Roxette, Ace Of Base, Swedish House Mafia and all those sorts of artists. Very enjoyable.

I figured I really should eat something and as I’d been awake since 4am travelling, going back to the hotel restaurant seemed a good idea. Reindeer for tea – a very lean meat, wonderfully presented, with a glass of Shiraz for company. I was about to head to bed when I got a message from my friends James and Simon – did I fancy going to a party? Oh goodness. I booked a ticket, freshened up, and got a cab out to one of the hotels.

The rest of the evening involved dancing, a rather splendidly-presented rhubarb cocktail with a price which would put the GDP of a small country to shame, more dancing, ridiculously expensive G&Ts which were 98% gin with the tonic waved from a distance, watching a couple of past Eurovision acts play live, and a nice chat with Charlotte Perrelli, who sang Hero for Sweden in 2008. And that was just Friday!

Saturday’s Daytime Amusement

Saturday brought an opportunity for exploration, so off up to Gamla Stan (Stockholm’s Old Town) and shopping. It’s quite a nice area with lots of little shops to poke around in: plenty of tourist-trap places selling Swedish-themed tat but a few rubies in the dust, such as a nice little jewellery shop with a shopkeeper who was only too happy to discuss who might win. She’d been to the jury final on Friday night and I had to repeatedly ask that she not spoil it for me, so I figured I was in for a treat later.

A happy discovery: Sweden has more tall women, which means the shops are better stocked for us lanky birds, and at a reasonable cost too. Everyone’s so friendly – even the lady in H&M taught me some Swedish when I apologetically mumbled that I didn’t understand what she’d asked of me. I started with “hej” (hello) and “tack” (thanks), and the basic number system. On the topic of language, the pronunciation of “J” as a “y” sound confused me when I got into an Uber cab, where they already know your name: the driver would say “Yes?” and I’d reply “er, yes!”, followed by “no, no – your name is Yes?”… “ohhh, Jess, it’s pronounced with a J, I do apologise.” (Apologies by default, how British…)

Lunch was meatballs. Well, you have to, don’t you… none of this Ikea stuff here! More poking around in shops, and back to the hotel to get ready.

The Grand Final

Off to Eurovision, brb!

I wasn’t intending to overdo the patriotic outfit and in the end I think I scrubbed up pretty well: a red frock, Union flag scarf, flag earrings and red heels – possibly even subconsciously channelling Scooch. I was feeling a little self-conscious as I got the tram but any fears evaporated once I saw other Brits. Despite being on my own for a substantial part of my visit, I could always find someone to chat Eurovision with!

From the moment I arrived at Globen it was insane. People dressed up, dancing, singing. I joined a small group of Brits outside being interviewed for a French TV station, singing Joe and Jake’s entry at the top of our voices (apparently it’s been on telly though I’ve yet to see it). Unfeasible movement-limiting outfits from years gone by. A guy in a flesh-coloured skin suit (so he looked nude) with a wolf protecting his modesty. Serbian human disco balls. Spanish milk-maids. Companies giving out leaflets and free samples everywhere. Promoters touting unofficial afterparties. Journalists. Cameras. All utterly bonkers.

My ticket had me in the Tele2 Arena next door to the final itself, for the “Eurovision Party”. Security was a quick pat-down and in I went. I found another group of Brits (look for the flag – these lovely folks from Liverpool had bunting!) and hooray, we had beer too! In our Arena a few acts were performing before the Contest itself began, most notable of which was 2012 winner Loreen singing Euphoria live: this was my coming-out song a few years ago and it might seem soppy but the music still takes me back to that emotional rollercoaster.

We had a short informational note telling us when the broadcast would come live from our bit, and then the big screen went live to the stage next door. Not just the stage, though: we had little screens showing us the Eurovision control room, the backstage area (fascinating watching the acts prepare and all the movement that goes on around staging and scenery) and the artists’ green room. A countdown to going live across the world… and… Good Evening Europe!

That’s Mans Zelmerlow up there on stage…

I’m not ashamed to say I danced. Belgium opening the Contest was excellent as it got us “in the mood”, that Little-Boots-Meets-Uptown-Funk thing. I’ve never danced while the competition was actually airing and it was so liberating, especially with others dancing around and the occasional witty remark between countrymen. As Poland took to the stage I decided I needed another beer – bad timing, I heard an announcement we were going live imminently!

Simon and I quickly elbowed our way forward, and suddenly we were live with Måns Zelmerlow and two previous entrants! Unashamedly, I waved – and if you watch in HD you can see me and my unfeasibly long arm. So there, I was being a numpty on the official Eurovision coverage – that’s one off the bucket-list.

The performances went on and by the time we reached the UK’s entry – 25th down the line – I’d been joined by Jon who runs The Thoroughly Good Blog. We stood, nervous, tense, draped in my Union Flag scarf, and watched Joe and Jake storm it. The performance was flawless, the song a decent entry. We raised a glass.

I really very much enjoyed Love Love Peace Peace, the interval musical number performed by Petra Mede and Måns Zelmerlow alongside Eurovision stars such as Alexander Rybak and Lordi. I do hope they release it as a single, I’d certainly buy it but given the only copies of the 2013 song Swedish Smorgasbord are bootlegs I’m not holding out much hope.

Standing near the front I was grabbed by two women who said “You have to come this way! Come over here! We need a photo!” – it turns out I was as tall as their friend Agneta, and this was new. I posed with her for a photo and we danced for a while, chatting away and realising we were both the same sizes. Nice woman, maybe we’ll meet again in a couple of years and dance again!


Me and Agneta, tall girls together.

I’m not a fan of the new scoring system, which takes the jury votes and the televotes separately, with the net effect of jumbling the scoreboard again halfway through proceedings (still, I bet it made the drinking game in Wrenthorpe somewhat more hectic).

We were crossing our fingers it wouldn’t be Russia and by the end of the jury voting it did indeed seem we had a winner in Australia; apparently over in the Press Centre someone was already booking hotels and flights for Berlin, strongly rumoured to be the venue.

The televotes came in and… well, Ukraine got it, didn’t they. I don’t think we were all massively surprised, but we certainly weren’t as elated as we should have been. I’d been talking to other fans and we were all glumly resigned to Russia winning even though nobody seemed to want it, so surely Ukraine’s good, right? Maybe not. Ukraine’s record of LGBT rights abuses and violence is well documented, and I’m not sure I’d want to go somewhere my girlfriend and I wouldn’t feel safe.

Put simply, UK entry Joe & Jake were shortchanged: we had a very good song, and I don’t think it could have been executed better. Maybe the problem now lies in the promotion, but when it comes down to it, for all that we laughed along with Terry Wogan, the casual xenophobia did a lot of damage and will take a long time to blow over.

In the aftermath, I ventured to the arena lounge and stopped with new friends until being chucked out at 4am – slightly cheaper gin, I only had to hawk the one kidney. As the sun came up I hopped a cab back to the hotel and collapsed into bed: ouchie sore feet from wearing heels for 8 hours, a slight smell of stale beer spilled onto me by other partygoers, and a total buzz from the night.


Home with a mug of tea, the Official Programme and lots of lovely memories.

Sunday brought more wandering around Stockholm, a trip back to Gamla Stan, and through the remains of the Eurovision Village; it was nice to see someone else clearing up after a party for a change although it amplified my comedown somewhat. Making it back to the hotel to collect my bag and thus to Arlanda, I saw various Brits on the way and we chatted about the result, what it meant and where we might have gone wrong. Everyone had an opinion, but none of them pointed to our entry itself. Still, there’s always next year isn’t there.

(A little surreal to see the Israeli entry at the airport queueing for the security check next to me, as well.)

I’m saddened it came down to a protest vote – at least on the surface of it. This isn’t what Eurovision is about, and for all the bitching folks do about the politics of voting for neighbours, it hadn’t affected it that much until this year. Russia are full of sour grapes, some press are talking about their country being the laughing-stock of Europe, and most have an analysis of the impact on 2017’s event.

Would I go to the Eurovision Grand Final again? Absolutely. It’s like an addiction, and I will definitely be up for it, especially when it’s in Europe itself. I’ve met some lovely people wandering around the town, in the Arena, even on the flight home. However, I’m not sure that stretches to risking my personal safety and being beaten up (or worse) in Ukraine – plus it’s a bit far to go for a weekend…

…I might go back to submitting a song though. Hmm.


Header photo credit: Jason Foley-Doherty.

by Jess at May 16, 2016 03:52 PM

May 13, 2016

Liam Proven

The decline & fall of DEC - & MS

(Repurposed email reply)

Although I was educated & worked with DEC systems, I didn't have much to do with the company itself. Its support was good, the kit ludicrously expensive, and the software offerings expensive, slow and lacking competitive features. However, they also scored in some ways.

My 60,000' view:

Microsoft knew EXACTLY what it was doing with its practices when it built up its monopoly. It got lucky with the technology: its planned future super products flopped, but it turned on a dime & used what worked.

But killing its rivals, any potential rival? Entirely intentional.

The thing is that no other company was poised to effectively counter the MS strategy. Nobody.

MS' almost-entirely-software-only model was almost unique. Its ecosystem of apps and 3rd party support was unique.

In the end, it actually did us good. Gates wanted a computer on every desk. We got that.

The company's strategy called for open compatible generic hardware. We got that.

Only one platform, one OS, was big enough, diverse enough, to compete: Unix.

But commercial, closed, proprietary Unix couldn't. 2 ingredients were needed:

#1 COTS hardware - which MS fostered;
#2 FOSS software.

Your point about companies sharing their source is noble, but I think inadequate. The only thing that could compete with a monolithic software monopolist on open hardware was open software.

MS created the conditions for its own doom.

Apple cleverly leveraged FOSS Unix and COTS X86 hardware to take the Mac brand and platform forward.

Nobody else did, and they all died as a result.

If Commodore, Atari and Acorn had adopted similar strategies (as happened independently of them later, after their death, resulting in AROS, AFROS & RISC OS Open), they might have lived.

I can't see it fitting the DEC model, but I don't know enough. Yes, cheap low-end PDP-11s with FOSS OSes might have kept them going longer, but not saved them.

The deal with Compaq was catastrophic. Compaq was in Microsoft's pocket. I suspect that Intel leant on Microsoft and Microsoft then leant on Compaq to axe Alpha, and Compaq obliged. It also knifed HP OpenMail, possibly the Unix world's only viable rival to Microsoft Exchange.

After that it was all over bar the shouting.

Microsoft could not have made a success of OS/2 3 without Dave Cutler... But DEC couldn't have made a success out of PRISM either, I suspect. Maybe a stronger DEC would have meant Windows NT would never have happened.

May 13, 2016 11:51 AM

May 06, 2016

Andy Smith

Using a TOTP app for multi-factor SSH auth

I’ve been playing around with enabling multi-factor authentication (MFA) on web services and went with TOTP. It’s pretty simple to implement in Perl, and there are plenty of apps for it including Google Authenticator, 1Password and others.

I also wanted to use the same multi-factor auth for SSH logins. Happily, from Debian jessie onwards libpam-google-authenticator is packaged. To enable it for SSH you would just add the following:

auth required

to /etc/pam.d/sshd (put it just after @include common-auth).

and ensure that:

ChallengeResponseAuthentication yes

is in /etc/ssh/sshd_config.

Not all my users will have MFA enabled though, so to skip prompting for these I use:

auth required nullok

Finally, I only wanted users in a particular Unix group to be prompted for an MFA token so (assuming that group was totp) that would be:

auth [success=1 default=ignore] quiet user notingroup totp
auth required nullok

If the pam_succeed_if conditions are met then the next line is skipped, so that causes pam_google_authenticator to be skipped for users not in the group totp.

Each user needs a TOTP secret key generated and stored. If you’re only setting this up for SSH then you can use the google-authenticator binary from the libpam-google-authenticator package. This asks you some simple questions and then populates the file $HOME/.google_authenticator with the key and some configuration options. That looks like:

" RATE_LIMIT 3 30 1462548404

The first line is the secret key; the five numbers are emergency codes that will each work once if you’re locked out.

If generating keys elsewhere then you can just populate this file yourself. If the file isn’t present then that’s when “nullok” applies; without “nullok” authentication would fail.
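For illustration only – the secret and scratch codes below are entirely made up – a complete ~/.google_authenticator typically looks something like this, with the base32 secret on the first line, option lines prefixed with `" `, and one emergency code per line at the end:

```
ABCDEFGHIJKLMNOP
" RATE_LIMIT 3 30
" TOTP_AUTH
11111111
22222222
33333333
44444444
55555555
```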

Note that despite the repeated mentions of “google” here, this is not a Google-specific service and no data is sent to Google. Google are the authors of the open source Google Authenticator mobile app and the libpam-google-authenticator PAM module, but (as evidenced by the Perl example) this is an open standard and client and server sides can be implemented in any language.
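Since it is an open standard, any language will do. The post doesn’t include the Perl code, but to show how little is involved, here is a rough sketch of the same TOTP algorithm (RFC 6238: HMAC-SHA1 over a 30-second time-step counter, truncated to six digits) in Python – the function name and structure are mine, not from the post:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, timestamp=None, step=30, digits=6):
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter."""
    if timestamp is None:
        timestamp = int(time.time())
    key = base64.b32decode(secret_b32.upper())
    # The moving factor is the number of whole time steps since the epoch.
    counter = struct.pack(">Q", timestamp // step)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): low nibble of the last byte picks an offset.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238's SHA-1 test secret is the ASCII string "12345678901234567890":
rfc_secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(rfc_secret, timestamp=59))  # 287082, matching the RFC test vector
```

Checking against the published RFC 6238 test vectors (the last six digits of the eight-digit SHA-1 results) is a quick way to convince yourself an implementation like this is correct before wiring it into anything.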

So that is how you can make a web service and an SSH service use the same TOTP multi-factor authentication.

by Andy at May 06, 2016 04:34 PM

April 28, 2016

Steve Kennedy

Misfit Ray, it might actually be the first wearable that looks like jewellery

A while back, Misfit released the Ray. Basically a tube with a single LED and straps coming out of either side (initially only silicone, which is a bit ugly, but leather straps are now available, which look much nicer - though obviously not made for sports/water). The tube is made from aluminium and comes in Carbon Black or Rose Gold.

Apart from the lack of LEDs, the Ray has pretty much the same functionality as the Shine2: it measures steps, activities and sport, and works with Misfit Link to trigger actions (and can link to IFTTT to trigger pretty much anything).

Progress is tracked by the LED flashing different colours (under 25%, 25%+, 50%+, 75%+ and 100%+, i.e. goal met) and it flashes blue when syncing with the Misfit app over Bluetooth (it supports Bluetooth 4.1). It will also indicate incoming calls, incoming texts and a wake-up alarm.

The Shine2 uses a single CR2032 battery while the Ray now uses 3 x 393 button cells (which should also give six months’ usage).

The Ray is also 50m water resistant so can be used for swimming.

Misfit are promising a range of new straps and other accessories so it can be worn, say, as a pendant.

The sport band version retails for £72.87 and the leather for £87.45, not the cheapest units out there, but probably (at least for now) the prettiest.

by Steve Karmeinsky at April 28, 2016 06:57 PM

April 27, 2016

Liam Proven

Where did we all go wrong? And why doesn't anyone remember? [Tech blog post]

My contention is that a large part of the reason that we have the crappy computers that we do today -- lowest-common-denominator boxes, mostly powered by one of the kludgiest and most inelegant CPU architectures of the last 40 years -- is not technical, nor even primarily commercial or due to business pressures, but rather, it's cultural.

When I was playing with home micros (mainly Sinclair and Amstrad; the American stuff was just too expensive for Brits in the early-to-mid 1980s), the culture was that Real Men programmed in assembler and the main battle was Z80 versus 6502, with a few weirdos saying that 6809 was better than either. BASIC was the language for beginners, and a few weirdos maintained that Forth was better.

At university, I used a VAXcluster and learned to program in Fortran-77. The labs had Acorn BBC Micros in -- solid machines, the best 8-bit BASIC ever, and they could interface both with lab equipment over IEEE-488 and with generic printers and so on over Centronics parallel and its RS-423 interface [EDIT: fixed!], which could talk to RS-232 kit.

As I discovered when I moved into the professional field a few years later (1988), this wasn't that different from the pro stuff. A lot of apps were written in various BASICs, and in the old era of proprietary OSes on proprietary kit, for performance, you used assembler.

But a new wave was coming. MS-DOS was already huge and the Mac was growing strongly. Windows was on v2 and was a toy, but Unix was coming to mainstream kit, or at least affordable kit. You could run Unix on PCs (e.g. SCO Xenix), on Macs (A/UX), and my employers had a demo IBM RT-6150 running AIX 1.

Unix wasn't only the domain (pun intentional) of expensive kit priced in the tens of thousands.

A new belief started to spread: that if you used C, you could get near-assembler performance without the pain, and the code could be ported between machines. DOS and Mac apps started to be written (or rewritten) in C, and some were even ported to Xenix. In my world, nobody used stuff like A/UX or AIX, and Xenix was specialised. I was aware of Coherent as the only "affordable" Unix, but I never saw a copy or saw it running.

So this second culture of C code running on non-Unix OSes appeared. Then the OSes started scrambling to catch up with Unix -- first OS/2, then Windows 3, then the decade-long parallel universe of Windows NT, until XP became established and Win9x finally died. Meanwhile, Apple and IBM flailed around until IBM surrendered and Apple merged with NeXT and switched to NeXTstep.

Now, Windows is evolving to be more and more Unix-like, with GUI-less versions, clean(ish) separation between GUI and console apps, a new rich programmable shell, and so on.

While the Mac is now a Unix box, albeit a weird one.

Commercial Unix continues to wither away. OpenVMS might make a modest comeback. IBM mainframes seem to be thriving; every other kind of big iron is now emulated on x86 kit, as far as I can tell. IBM has successfully killed off several efforts to do this for z Series.

So now, it's Unix except for the single remaining mainstream proprietary system: Windows. Unix today means Linux, while the weirdos use FreeBSD. Everything else seems to be more or less a rounding error.

C always was like carrying water in a sieve, so now, we have multiple C derivatives, trying to patch the holes. C++ has grown up but it's like Ada now: so huge that nobody understands it all, but actually, a fairly usable tool.

There's the kinda-sorta FOSS "safe C++ in a VM", Java. The proprietary kinda-sorta "safe C++ in a VM", C#. There's the not-remotely-safe kinda-sorta C in a web browser, Javascript.

And dozens of others, of course.

Even the safer ones run on a basis of C -- so the lovely cuddly friendly Python, that everyone loves, has weird C printing semantics to mess up the heads of beginners.

Perl has abandoned its base, planned to move onto a VM, then the VM went wrong, and now has a new VM and to general amazement and lack of interest, Perl 6 is finally here.

All the others are still implemented in C, mostly on a Unix base, like Ruby, or on a JVM base, like Clojure and Scala.

So they still have C-like holes, and there are frequent patches and updates to try to make them retain some water for a short time, while the "cyber criminals" make hundreds of millions.

Anything else is "uncommercial" or "not viable for real world use".

Borland totally dropped the ball and lost a nice little earner in Delphi, but it continues as Free Pascal and so on.

Apple goes its own way, but has forgotten the truly innovative projects it had pre-NeXT, such as Dylan.

There were real projects that were actually used for real work, like Oberon the OS, written in Oberon the language. Real pioneering work in UIs, such as Jef Raskin's machines, the original Mac and Canon Cat -- forgotten. People rhapsodise over the Amiga and forget that the planned OS, CAOS, to be as radical as the hardware, never made it out of the lab. Same, on a smaller scale, with the Acorn Archimedes.

Despite that, of course, Lisp never went away. People still use it, but they keep their heads down and get on with it.

Much the same applies to Smalltalk. Still there, still in use, still making real money and doing real work, but forgotten all the same.

The Lisp Machines and Smalltalk boxes lost the workstation war. Unix won, and as history is written by the victors, now the alternatives are forgotten or dismissed as weird kooky toys of no serious merit.

The senior Apple people didn't understand the essence of what they saw at PARC: they only saw the chrome. They copied the chrome, not the essence, and now all that any of us have is the chrome. We have GUIs, but on top of the nasty kludgy hacks of C and the like. A late-'60s skunkware project now runs the world, and the real serious research efforts to make something better, both before and after, are forgotten historical footnotes.

Modern computers are a vast disappointment to me. We have no thinking machines. The Fifth Generation, Lisp, all that -- gone.

What did we get instead?

Like dinosaurs, the expensive high-end machines of the '70s and '80s didn't evolve into their successors. They were just replaced. First little cheapo 8-bits, not real or serious at all, although they were cheap and people did serious stuff with them because it's all they could afford. The early 8-bits ran semi-serious OSes such as CP/M, but when their descendants sold a thousand times more, those descendants weren't running descendants of that OS -- no, it and its creator died.

CP/M evolved into a multiuser multitasking 386 OS that could run multiple MS-DOS apps on terminals, but it died.

No, then the cheapo 8-bits thrived in the form of an 8/16-bit hybrid, the 8086 and 8088, and a cheapo knock-off of CP/M.

This got a redesign into something grown-up: OS/2.

Predictably, that died.

So the hacked-together GUI for DOS got re-invigorated with an injection of OS/2 code, as Windows 3. That took over the world.

The rivals - the Amiga, ST, etc? 680x0 chips, lots of flat memory, whizzy graphics and sound? All dead.

Then Windows got re-invented with some OS/2 3 ideas and code, and some from VMS, and we got Windows NT.

But the marketing men got to it and ruined its security and elegance, to produce the lipstick-and-high-heels Windows XP. That version, insecure and flakey with its terrible bodged-in browser, that, of course, was the one that sold.

Linux got nowhere until it copied the XP model. The days of small programs, everything's a text file, etc. -- all forgotten. Nope, lumbering GUI apps, CORBA and RPC and other weird plumbing, huge complex systems, but it looks and works kinda like Windows and a Mac now so it looks like them and people use it.

Android looks kinda like iOS and people use it in their billions. Newton? Forgotten. No, people have Unix in their pocket, only it's a bloated successor of Unix.

The efforts to fix and improve Unix -- Plan 9, Inferno -- forgotten. A proprietary microkernel Unix-like OS for phones -- Blackberry 10, based on QNX -- not Androidy enough, and bombed.

We have less and less choice, made from worse parts on worse foundations -- but it's colourful and shiny and the world loves it.

That makes me despair.

We have poor-quality tools, built on poorly-designed OSes, running on poorly-designed chips. Occasionally, fragments of older better ways, such as functional-programming tools, or Lisp-based development environments, are layered on top of them, but while they're useful in their way, they can't fix the real problems underneath.

Occasionally someone comes along and points this out and shows a better way -- such as Curtis Yarvin's Urbit. Lisp Machines re-imagined for the 21st century, based on top of modern machines. But nobody gets it, and its programmer has some unpleasant and unpalatable ideas, so it's doomed.

And the kids who grew up after C won the battle deride the former glories, the near-forgotten brilliance that we have lost.

And it almost makes me want to cry sometimes.

We should have brilliant machines now, not merely Steve Jobs' "bicycles for the mind", but Gossamer Albatross-style hang-gliders for the mind.

But we don't. We have glorified 8-bits. They multitask semi-reliably, they can handle sound and video and 3D and look pretty. On them, layered over all the rubbish and clutter and bodges and hacks, inspired kids are slowly brute-forcing machines that understand speech, which can see and walk and drive.

But it could have been so much better.

Charles Babbage didn't finish the Difference Engine. Finishing it would have paid for him to build his Analytical Engine, and that would have given the Victorian British Empire the steam-driven computer, which would have transformed history.

But he got distracted and didn't deliver.

We started to build what a few old-timers remember as brilliant machines, machines that helped their users to think and to code, with brilliant -- if flawed -- software written in the most sophisticated computer languages yet devised, by the popular acclaim of the people who really know this stuff: Lisp and Smalltalk.

But we didn't pursue them. We replaced them with something cheaper -- with Unix machines, an OS only a nerd could love. And then we replaced the Unix machines with something cheaper still -- the IBM PC, a machine so poor that the £125 ZX Spectrum had better graphics and sound.

And now, we all use descendants of that. Generally acknowledged as one of the poorest, most-compromised machines, based on descendants of one of the poorest, most-compromised CPUs.

Yes, over the 40 years since then, most of the rough edges have been polished out. The machines are now small, fast, power-frugal with tons of memory and storage, with great graphics and sound. But it's taken decades to get here.

And the OSes have developed. Now they're feature-rich, fairly friendly, really very robust considering the stone-age stuff they're built from.

But if we hadn't spent 3 or 4 decades making a pig's ear into a silk purse -- if we'd started with a silk purse instead -- where might we have got to by now?

April 27, 2016 05:06 PM

April 26, 2016

Jonathan McDowell

Notes on Kodi + IR remotes

This post is largely to remind myself of the details next time I hit something similar; I found bits of relevant information all over the place, but not in one single location.

I love Kodi. These days the Debian packages give me a nice out of the box experience that is easy to use. The problem comes in dealing with remote controls and making best use of the available buttons. In particular I want to upgrade the VDR setup my parents have to a more modern machine that’s capable of running Kodi. In this instance an AMD E350 nettop, which isn’t recent but does have sufficient hardware acceleration of video decoding to do the job. Plus it has a built in fintek CIR setup.

First step was finding a decent remote. The fintek is a proper IR receiver supported by the in-kernel decoding options, so I had a lot of flexibility. As it happened I ended up with a surplus to requirements Virgin V Box HD remote (URC174000-04R01). This has the advantage of looking exactly like a STB remote, because it is one.

Pointed it at the box, saw that the fintek_cir module was already installed and fired up irrecord. Failed to get it to actually record properly. Googled lots. Found ir-keytable. Fired up ir-keytable -t and managed to get sensible output with the RC-5 decoder. Used irrecord -l to get a list of valid button names and proceeded to construct a vboxhd file which I dropped in /etc/rc_keymaps/. I then added a

fintek-cir * vboxhd

line to /etc/rc_maps.cfg to force my new keymap to be loaded on boot.
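For reference, the keymap files that ir-keytable loads from /etc/rc_keymaps/ are plain text: a header naming the table and protocol, then scancode/keycode pairs. A sketch of what the vboxhd file might look like (these scancodes are invented for illustration, not taken from the actual remote):

```
# table vboxhd, type: RC-5
0x1e01 KEY_1
0x1e02 KEY_2
0x1e0c KEY_POWER
0x1e20 KEY_UP
0x1e21 KEY_DOWN
```

The keycode names on the right must be ones the kernel input layer knows about, which is what the irrecord -l list provides.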

That got my remote working, but then came the issue of dealing with the fact that some keys worked fine in Kodi and others didn’t. This seems to be an issue with scancodes above 0xff. I could have remapped the remote not to use any of these, but instead I went down the inputlirc approach (which is already in use on the existing VDR box).

For this I needed a stable device file to point it at; the /dev/input/eventN file wasn’t stable and as a platform device it didn’t end up with a useful entry in /dev/input/by-id. A ‘quick’

udevadm info -a -p $(udevadm info -q path -n /dev/input/eventN)

provided me with the PNP id (FIT0002) allowing me to create /etc/udev/rules.d/70-remote-control.rules containing
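The rule itself appears to have been swallowed by the blog's HTML; a plausible reconstruction (the match keys here are assumed, not copied from the original post) would be something like:

```
# /etc/udev/rules.d/70-remote-control.rules (reconstructed example)
# Match the eventN device whose parent PNP id is the fintek CIR (FIT0002)
SUBSYSTEMS=="pnp", ATTRS{id}=="FIT0002", KERNEL=="event*", SYMLINK+="input/remote"
```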


Bingo, a /dev/input/remote symlink. /etc/default/inputlirc ended up containing:

OPTIONS="-g -m 0"

The options tell it to grab the device for its own exclusive use, and to take all scancodes rather than letting the keyboard ones through to the normal keyboard layer. I didn’t want anything other than things specifically configured to use the remote to get the key presses.

At this point Kodi refused to actually do anything with the key presses. Looking at ~kodi/.kodi/temp/kodi.log I could see them getting seen, but not understood. Further searching led me to construct an Lircmap.xml - in particular the piece I needed was the <remote device="/dev/input/remote"> bit. The existing /usr/share/kodi/system/Lircmap.xml provided a good starting point for what I wanted and I dropped my generated file in ~kodi/.kodi/userdata/.
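For the curious, the shape of such a Lircmap.xml is straightforward: the tag names are Kodi's logical buttons and the element content is the key name the remote sends. This fragment is illustrative only, cut down from the sort of thing in the system file rather than copied from the actual generated one:

```
<lircmap>
  <remote device="/dev/input/remote">
    <pause>KEY_PLAYPAUSE</pause>
    <stop>KEY_STOP</stop>
    <up>KEY_UP</up>
    <down>KEY_DOWN</down>
    <select>KEY_OK</select>
    <back>KEY_EXIT</back>
  </remote>
</lircmap>
```

The crucial part is the device attribute matching the path inputlirc is reading from.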

(Sadly it turns out I got lucky with the remote; it seems to be using the RC-5x variant which was broken in 3.17; works fine with the 3.16 kernel in Debian 8 (jessie) but nothing later. I’ve narrowed down the offending commit and raised #117221.)


April 26, 2016 08:32 PM

April 25, 2016

Liam Proven

Acorn: from niche to forgotten obscurity and total industry dominance at the same time

More retrocomputing meanderings -- whatever became of the ST, Amiga and Acorn operating systems?

The Atari ST's GEM desktop also ran on MS-DOS, DR's own DOS+ (a forerunner of the later DR-DOS) and today is included with FreeDOS. In fact the first time I installed FreeDOS I was *very* surprised to find my name in the credits. I debugged some batch files used in installing the GEM component.

The ST's GEM was the same environment. ST GEM was derived from GEM 1; PC GEM from GEM 2, crippled after an Apple lawsuit. Then they diverged. FreeGEM attempted to merge them again.

But the ST's branch prospered, before the rise of the PC killed off all the alternative platforms. Actual STs can be quite cheap now, or you can even buy a modern clone:

If you don't want to lash out but have a PC, the Aranym environment gives you something of the feel of the later versions. It's not exactly an emulator, more a sort of compatibility environment that enhances the "emulated" machine as much as it can using modern PC hardware.

And the ST GEM OS was so modular that different 3rd parties cloned every component, separately. Some commercially, some as FOSS. The Aranym team basically put together a sort of "distribution" of as many FOSS components as they could, to assemble a nearly-complete OS, then wrote the few remaining bits to glue it together into a functional whole.

So, finally, after the death of the ST and its clones, there was an all-FOSS OS for it. It's pretty good, too. It's called AFROS, Atari Free OS, and it's included as part of Aranym.

I longed to see a merger of FreeGEM and Aranym, but it was never to be.

The history of GEM and TOS is complex.

Official Atari TOS+GEM evolved into TOS 4, which later included the FOSS MiNT multitasking layer; it isn't much like the original ROM version in the first STs.

The underlying TOS OS is not quite like anything else.

AIUI, CP/M-68K was a real, if rarely-seen, OS.

However, it proved inadequate to support GEM, so it was discarded. A new kernel was written using some of the tech from what was later to become DR-DOS on the PC -- something less like CP/M and more like MS-DOS: directories, separated with backslashes; FAT format disks; multiple executable types, 8.3 filenames, all that stuff.

None of the command-line elements of CP/M or any DR DOS-like OS were retained -- the kernel booted the GUI directly and there was no command line, like on the Mac.

This is called GEMDOS and AIUI it inherits from both the CP/M-68K heritage and from DR's x86 DOS-compatible OSes.

The PC version of GEM also ran on Acorn's BBC Master 512 which had an Intel 80186 coprocessor. It was a very clever machine, in a limited way.

Acorn's series of machines are not well-known in the US, AFAICT, and that's a shame. They were technically interesting, more so IMHO than the Apple II and III, TRS-80 series etc.

The original Acorns were 6502-based, but with good graphics and sound, a plethora of ports, a clear separation between OS, BASIC and add-on ROMs such as the various DOSes, etc. The BASIC was, I'd argue strongly, *the* best 8-bit BASIC ever: named procedures, local variables, recursion, inline assembler, etc. Also the fastest BASIC interpreter ever, and quicker than some compiled BASICs.

Acorn built for quality, not price; the machines were aimed at the educational market, which wasn't so price-sensitive, a model that NeXT emulated. Home users were welcome to buy them & there was one (unsuccessful) home model, but they were unashamedly expensive and thus uncompromised.

The only conceptual compromise in the original BBC Micro was that there was provision for ROM bank switching, but not RAM. The 64kB memory map was 50:50 split ROM and RAM. You could switch ROMs, or put RAM in their place, but not have more than 64kB. This meant that the high-end machine had only 32kB RAM, and high-res graphics modes could take 21kB or so, leaving little space for code -- unless it was in ROM, of course.

The later BBC+ and BBC Master series fixed that. They also allowed ROM cartridges, rather than bare chips inserted in sockets on the main board, and a numeric keypad.

Acorn looked at the 16-bit machines in the mid-80s, mostly powered by Motorola 68000s of course, and decided they weren't good enough and that the tiny UK company could do better. So it did.

But in the meantime, it kept the 6502-based, resolutely-8-bit BBC Micro line alive with updates and new models, including ROM-based terminals and machines with a range of built-in coprocessors: faster 6502-family chips for power users, Z80s for CP/M, Intel's 80186 for kinda-sorta PC compatibility, the NatSemi 32016 with PANOS for ill-defined scientific computing, and finally, an ARM copro before the new ARM-based machines were ready.

Acorn designed the ARM RISC chip in-house, then launched its own range of ARM-powered machines, with an OS based on the 6502 range's. Although limited, this OS is still around today and can be run natively on a Raspberry Pi:

It's very idiosyncratic -- both the filesystem, the command line and the default editor are totally unlike anything else. The file-listing command is CAT, the directory separator is a full stop (i.e. a period), while the root directory is called $. The editor is a very odd dual-cursor thing. It's fascinating, totally unrelated to the entire DEC/MS-DOS family and to the entire Unix family. There is literally and exactly nothing else even slightly like it.

It was the first GUI OS to implement features that are now universal across GUIs: anti-aliased font rendering, full-window dragging and resizing (as opposed to an outline), and significantly, the first graphical desktop to implement a taskbar, before NeXTstep and long before Windows 95.

It supports USB, can access the Internet and WWW. There are free clients for chat, email, FTP, the WWW etc. and a modest range of free productivity tools, although most things are commercial.

But there's no proper inter-process memory protection, GUI multitasking is cooperative, and consequently it's not amazingly stable in use. It does support pre-emptive multitasking, but via the text editor, bizarrely enough, and only of text-mode apps. There was also a pre-emptive multitasking version of the desktop, but it wasn't very compatible, didn't catch on and is not included in current versions.

But saying all that, it's very interesting, influential, shared-source, entirely usable today, and it runs superbly on the £25 Raspberry Pi, so there is little excuse not to try it. There's also a FOSS emulator which can run the modern freeware version:

For users of the old hardware, there's a much more polished commercial emulator for Windows and Mac which has its own, proprietary fork of the OS:

There's an interesting parallel with the Amiga. Both Acorn and Commodore had ambitious plans for a modern multitasking OS, which they both referred to as Unix-like. In both cases the project didn't deliver, and the ground-breaking, industry-redefining hardware instead shipped with much less ambitious OSes. Both were nonetheless widely loved, and both still survive today, 30 years later, in the form of multiple actively-maintained forks -- even though Unix in fact caught up and long surpassed these 1980s oddballs.

AmigaOS, based in part on the academic research OS Tripos, has 3 modern forks: the FOSS AROS, on x86, and the proprietary MorphOS and AmigaOS 4 on PowerPC.

Acorn RISC OS, based in part on Acorn MOS for the 8-bit BBC Micro, has 2 contemporary forks: RISC OS 5, owned by Castle Technology but developed by RISC OS Open, shared source rather than FOSS, running on Raspberry Pi, BeagleBoard and some other ARM boards, plus some old hardware and RPC Emu; and RISC OS 4, now owned by the company behind VirtualAcorn, run by an ARM engineer who apparently made good money selling software ARM emulators for x86 to ARM holdings.

Commodore and the Amiga are both long dead and gone, but the name periodically changes hands and reappears on various bits of modern hardware.

Acorn is also long dead, but its scion ARM Holdings designs the world's most popular series of CPUs, totally dominates the handheld sector, and outsells Intel, AMD & all other x86 vendors put together something like tenfold.

Funny how things turn out.

April 25, 2016 01:55 PM

April 18, 2016

Jonathan McDowell

Going to DebConf 16


Whoop! Looking forward to it already (though will probably spend it feeling I should be finishing my dissertation).


2016-07-01 15:20 DUB -> 16:45 LHR BA0837
2016-07-01 21:35 LHR -> 10:00 CPT BA0059


2016-07-10 19:20 CPT -> 06:15 LHR BA0058
2016-07-11 09:20 LHR -> 10:45 DUB BA0828

(image stolen from Gunnar)

April 18, 2016 01:12 PM

April 13, 2016

Jonathan McDowell

Software in the Public Interest contributing members: Check your activity status!

That’s a longer title than I’d like, but I want to try and catch the attention of anyone who might have missed more directed notifications about this. If you’re not an SPI contributing member there’s probably nothing to see here…

Although I decided not to stand for re-election at the Software in the Public Interest (SPI) board elections last July, I haven’t stopped my involvement with the organisation. In particular I’ve spent some time working on an overhaul of the members website and rolling it out. One of the things this has enabled is implementation of 2009-11-04.jmd.1: Contributing membership expiry, by tracking activity in elections and providing an easy way for a member to indicate they consider themselves active even if they haven’t voted.

The plan is that this will run at some point after the completion of every board election. A first pass of cleanups was completed nearly a month ago, contacting all contributing members who’d never been seen to vote and asking them to update their status if they were still active. A second round, of people who didn’t vote in the last board election (in 2014), is currently under way. Affected members will have been emailed directly and there was a mail to spi-announce, but I’m aware people often overlook these things or filter mail off somewhere that doesn’t get read often.

If you are an SPI Contributing member who considers themselves an active member I strongly recommend you login to the SPI Members Website and check the “Last active” date displayed is after 2014-07-14 (i.e. post the start of the last board election). If it’s not, click on the “Update” link beside the date. The updated date will be shown once you’ve done so.

Why does pruning inactive members matter? The 2015 X.Org election results provide at least one indication of why ensuring you have an engaged membership is important - they failed to make a by-laws change that a vast majority of votes were in favour of, due to failing to make quorum. (If you’re a member, go vote!)

April 13, 2016 12:04 PM

April 12, 2016

Jess Rowbottom

Isn’t It Bleeding Obvious?

Having a bit of spare time on my hands over Christmas last year I started writing music again, all leading up to an album release on 17th November 2016, a year after the first song was written. It’s a bit of a mix of genres and styles, but I play most things on there, involving talented pals when it feels right.

You can find out more about it (including a rather nice interview written by student journo Andy Carson) over on The Bleeding Obvious website.

There’s also a Facebook page, and a Twitter stream – of course there is, this is 2016…

by Jess at April 12, 2016 07:16 AM

April 06, 2016

Andy Smith

rsync and sudo conundrum


  • You’re logged in to hostA
  • You need to rsync some files from hostB to hostA
  • The files on hostB are only readable by root and they must be written by root locally (hostA)
  • You have sudo access to root on both
  • You have ssh public key access to both
  • root can’t ssh between the two

Normally you’d do this:

hostA$ rsync -av hostB:/foo/ /foo/

but you can’t because your user can’t read /foo on hostB.

So then you might try making rsync run as root on hostB:

hostA$ rsync --rsync-path='sudo rsync' -av hostB:/foo/ /foo/

but that fails because ssh needs a pseudo-terminal to ask you for your sudo password on hostB:

sudo: no tty present and no askpass program specified
rsync: connection unexpectedly closed (0 bytes received so far) [Receiver]
rsync error: error in rsync protocol data stream (code 12) at io.c(226) [Receiver=3.1.1]

So then you can try giving it an askpass program:

hostA$ rsync \
       --rsync-path='SUDO_ASKPASS=/usr/bin/ssh-askpass sudo rsync' \
       -av hostB:/foo/ /foo/

and that nearly works! It pops up an askpass dialog (so you need X11 forwarding) which takes your password and does stuff as root on hostB. But ultimately fails because it’s running as your unprivileged user locally (hostA) and can’t write the files. So then you try running the lot under sudo:

hostA$ sudo rsync \
       --rsync-path='SUDO_ASKPASS=/usr/bin/ssh-askpass sudo rsync' \
       -av hostB:/foo/ /foo/

This fails because X11 forwarding doesn’t work through the local sudo. So become root locally first, then tell rsync to ssh as you:

hostA$ sudo -i
hostA# rsync \
       -e 'sudo -u youruser ssh' \
       --rsync-path='SUDO_ASKPASS=/usr/bin/ssh-askpass sudo rsync' \
       -av hostB:/foo/ /foo/


Answer cobbled together with help from dutchie, dne and dg12158. Any improvements? Not needing X11 forwarding would be nice.

Alternate methods:

  • Use tar:
    $ ssh \
      -t hostB 'sudo tar -C /foo -cf - .' \
      | sudo tar -C /foo -xvf -
  • Add public key access for root
  • Use filesystem ACLs to allow unprivileged user to read files on hostB.

by Andy at April 06, 2016 02:21 PM

April 05, 2016

Liam Proven

The AmigaOS lives on! It's up to 4.1 now. But is there any point today?

I am told it's lovely to use. Sadly, it only runs on obscure PowerPC-based kit that costs a couple of thousand pounds and can be out-performed by a £300 PC.

AmigaOS's owners -- Hyperion, I believe -- chose the wrong platform.

On a Raspberry Pi or something, it would be great. On obscure expensive PowerPC kit, no.

Also, saying that, I got my first Amiga in the early 2000s. If I'd had one 15y earlier, I'd probably have loved it, but I bought a 2nd hand Archimedes instead (and still think it was the right choice for a non-gamer and dabbler in programming).

A few years ago, with a LOT of work using 3 OSes and 3rd-party disk-management tools, I managed to coax MorphOS onto my Mac mini G4. Dear hypothetical gods, that was a hard install.

It's... well, I mean, it's fairly fast, but... no Wifi? No Bluetooth?

And the desktop. It got hit hard with the ugly stick. I mean, OK, it's not as bad as KDE, but... ick.

Learning AmigaOS when you already know more modern OSes -- OS X, Linux, gods help us, even Windows -- well, the Amiga seems pretty weird, and often for no good reason. E.g. a graphical file manager, but not all files have icons. They're not hidden, they just don't have icons, so if you want to see them, you have to do a second show-all operation. And the dependence on RAMdisks, which are a historical curiosity now. And the need to right-click to show the menu bar when it's on a screen edge.

A lot of pointless arcana, just so Apple didn't sue, AFAICT.

I understand the love if one loved it back then. But now? Yeeeeeeaaaaaah, not so much.

Not that I'm proclaiming RISC OS to be the business now. I like it, but it's weird too. But AmigaOS does seem a bit primitive now. OTOH, if they sorted out multiprocessor support and memory protection and it ran on cheap ARM kit, then yeah, I'd be interested.

April 05, 2016 12:57 PM

March 31, 2016

Aled Treharne (ThinkSIP)

Twitter debate

As part of the run up to UCExpo I’m going to be taking part in my first Twitter debate this afternoon (Tweebate?). I’ll be taking over the SIPHON twitter account: @SIPHON_Networks.

Feel free to fire me your questions – the theme for the debate is going to the Future of Communications Security, which gives rise to the hashtag for this debate: #comsecfuture

by Aled Treharne at March 31, 2016 12:26 PM

March 30, 2016

Liam Proven

This will get me accused of fanboyism (again), but like it or not, Apple shaped the PC industry.

I recently read that a friend of mine claimed that "Both the iPhone and iPod were copied from other manufacturers, to a large extent."

This is a risible claim, AFAICS.

There were pocket MP3 jukeboxes before the iPod. I still own one. They were fairly tragic efforts.

There were smartphones before the iPhone. I still have at least one of them, too. Again, really tragic from a human-computer interaction point of view.

AIUI, the iPhone originated internally as a shrunk-down tablet. The tablet originated from a personal comment from Bill Gates to Steve Jobs that although tablets were a great idea, people simply didn’t want tablets, because Microsoft had made them and they didn’t sell.

Jobs’ response was that the Microsoft ones didn’t sell because they were no good, not because people didn’t want tablets. In particular, Jobs stated that using a stylus was a bad idea. (This is also a pointer as to why he cancelled the Newton. And guess what? I've got one of them, too.)

Gates, naturally, contested this, and Jobs started an internal project to prove him wrong: a stylus-free finger-operated slim light tablet. However, when it was getting to prototype form, he allegedly realised, with remarkable prescience, that the market wasn’t ready yet, and that people needed a first step — a smaller, lighter, simpler, pocketable device, based on the finger-operated tablet.

Looking for a role or function for such a device, the company came up with the idea of a smartphone.

Smartphones certainly existed, but they were a geek toy, nothing more.

Apple was bold enough to make a move that would kill its most profitable line — the iPod — with a new product. Few would be so bold.

I can’t think of any other company that would have been bold enough to invent the iPhone. We might have got to devices as capable as modern smartphones and tablets, but I suspect they’d have still been festooned in buttons and a lot clumsier to use.

It’s the GUI story again. Xerox sponsored the invention and original development but didn’t know WTF to do with it. Contrary to the popular history, it did productise it, but as a vastly expensive specialist tool. It took Apple to make it the standard method of HCI, and it took Apple two goes and many years. The Lisa was still too fancy and expensive, and the original Mac too cut-down and too small and compromised.

The many rivals’ efforts were, in hindsight, almost embarrassingly bad. IBM’s TopView was a pioneering GUI and it was rubbish. Windows 1 and 2 were rubbish. OS/2 1.x was rubbish, and to be honest, OS/2 2.x was the pre-iPhone smartphone of GUI OSes: very capable, but horribly complex and fiddly.

Actually, arguably — and demonstrably, from the Atari ST market — DR GEM was a far better GUI than Windows 1 or 2. GEM was a rip-off of the Mac; the PC version got sued and crippled as a result, so blatant was it. It took MS over a decade to learn from the Mac (and GEM) and produce the first version of Windows with a GUI good enough to rival the Mac’s, while being different enough not to get sued: Windows 95.

Now, 2 decades later, everyone’s GUI borrows from Win95. Linux is still struggling to move on from Win95-like desktops, and even Mac OS X, based on a product which inspired Win95, borrows some elements from the Win95 GUI.

Everyone copies MS, and MS copies Apple. Apple takes bleeding-edge tech and turns geek toys into products that the masses actually want to buy.

Microsoft’s success is founded on the IBM PC, and that was IBM’s response to the Apple ][.

Apple has been doing this consistently for about 40 years. It often takes it 2 or 3 goes, but it does.

  • First time: 8-bit home micros (the Apple ][, an improved version of a DIY kit.)

  • Second time: GUIs (first the Lisa, then the Mac).

  • Third time: USB (on the iMac, arguably the first general-purpose PC designed and sold for Internet access as its primary function).

  • Fourth time: digital music players (the iPod wasn’t even the first with a hard disk).

  • Fifth time: desktop Unix (OS X, based on NeXTstep).

  • Sixth time: smartphones (based on what became the iPad, remember).

  • Seventh time: tablets (the iPad, actually progenitor of the iPhone rather than the other way round).

Yes, there are too many Mac fans, and they’re often under-informed. But there are also far too many Microsoft apologists, and too many Linux ones, too.

I use an Apple desktop, partly because with a desktop, I can choose my own keyboard and pointing device. I hate modern Apple ones.

I don’t use Apple laptops or phones. I’ve owned multiple examples of both. I prefer the rivals.

My whole career has been largely propelled by Microsoft products. I still use some, although my laptops run Linux, which I much prefer.

I am not a fanboy of any of them, but sadly, anyone who expresses fondness or admiration for anything Apple will be inevitably branded as one by the Anti-Apple fanboys, whose ardent advocacy is just as strong and just as irrational.

As will this.

March 30, 2016 06:33 PM

March 26, 2016

Jonathan McDowell

Dr Stoll: Or how I learned to stop worrying and love the GPL

[I wrote this as part of BelFOSS but I think it’s worth posting here.]

My Free Software journey starts with The Cuckoo’s Egg. Back in the early 90s a family friend suggested I might enjoy reading it. He was right; I was fascinated by the world of interconnected machines it introduced me to. That helped start my involvement in FidoNet, but it also got me interested in Unix. So when I saw a Linux book at the Queen’s University bookshop (sadly no longer with us) with a Slackware CD in the back I had to have it.

The motivation at this point was to have a low cost version of Unix I could run on the PC hardware I already owned. I had no knowledge of the GNU Project before this point, and as I wasn’t a C programmer I had no interest in looking at the source code. I spent some time futzing around with it and that partition (I was dual booting with DOS 6.22) fell into disuse. It wasn’t until I’d learnt some C and turned up to university, which provided me with an internet connection and others who were either already using Linux or interested in doing so, that I started running a Linux box full time.

Once I was doing that I became a lot more interested in the Open Source side of the equation. Rather than running a closed operating system that even the API for wasn’t properly specified (or I wouldn’t have needed my copy of Undocumented DOS) I had the complete source to both the underlying OS and all the utilities that it was using. For someone doing a computer science degree this was invaluable. Minix may have been the OS discussed in the OS Design module I studied, but Linux was a much more feature complete option that I was running on my desktop and could also peer under the hood of.

In my professional career I’ve always welcomed the opportunities to work with Open Source. A long time ago I experienced a particularly annoying issue when writing a device driver under QNX. The documentation didn’t seem to match the observed behaviour of the subsystem I was interfacing with. However due to licensing issues only a small number of people in the organisation were able to actually look at the QNX source. So I ended up wasting a much more senior engineer’s time with queries like “I think it’s actually doing x, y and z instead of a, b and c; can you confirm?”. Instances where I can look directly at the source code myself make me much more productive.

Commercial development also started to make me more understanding of the Free Software nature of the code I was running. It wasn’t just the ability to look at the code which was useful, but also the fact there was no need to reinvent the wheel. Need a base OS to build an appliance on? Debian ensures that the main component is Free for all usage. No need to worry about rolling your own compilers, base libraries etc. From a commercial perspective that allows you to concentrate on the actual product. And when you hit problems, the source is available and you can potentially fix it yourself, or at least more easily find out if there’s been a fix for that issue released (being able to see code development in version control systems, rather than getting a new upstream release with a whole heap of unrelated fixes in it, really helps with that).

I had thus progressed from using FLOSS because it was free-as-in-beer, to appreciating the benefits of Open Source in my own learning and employment experiences, to a deeper understanding of the free-as-in-speech benefits that could be gained. However at this point I was still thinking very much from a developer mindset. Even my thoughts about how users can benefit from Free Software were in the context of businesses being able to easily switch suppliers or continue to maintain legacy software because they had the source to their systems available.

One of the major factors that has helped me to see beyond this is the expansion of the Internet of Things (IoT). With desktop or server software there is by and large a choice about what to use. This is not the case with appliances. While manufacturers will often produce a few revisions of software for their devices, usually eventually there is a newer and shiny model and the old one is abandoned. This is problematic for many reasons. For example, historically TVs have been long lived devices (I had one I bought second hand that happily lasted me 7+ years). However the “smart” capabilities of the TV I purchased in 2012 are already of limited usefulness, and LG have moved on to their current models. I have no intention of replacing the device any time soon, so have had to accept it is largely acting as a dumb display. More serious is the lack of security updates. For a TV that doesn’t require a network connection to function this is not as important, but the IoT is a trickier proposition. For example Matthew Garrett had an awful experience with some ‘intelligent’ light bulbs, which effectively circumvented any home network security you might have set up. The manufacturer’s defence? No longer manufactured or supported.

It’s cases like these that have slowly led me to a more complete understanding of the freedom that Free Software truly offers to users. It’s not just about cost free/low cost software. It’s not just about being able to learn from looking at the source to the programs you are running. It’s not even about the freedom to be able to modify the programs that we use. It’s about giving users true Freedom to use and modify their devices as they see fit. From this viewpoint it is much easier to understand the protections against Tivoization that were introduced with GPLv3, and better appreciate the argument sometimes made that the GPL offers more freedom than BSD style licenses.

March 26, 2016 04:28 PM

March 21, 2016

Liam Proven

Confessions of a Sinclair fan

I'm very fond of Spectrums (Spectra?) because they're the first computer I owned. I'd used my uncle's ZX-81, and one belonging to a neighbour, and Commodore PETs at school, but the PET was vastly too expensive and the ZX-81 too limited to be of great interest to me.

I read an article once that praised Apple for bringing home computers to the masses with the Apple ][, the first home computer for under US$ 1000. A thousand bucks? That was fantasy winning-the-football-pools money!

No, for me, the hero of the home computer revolution was Sir Clive Sinclair, for bringing us the first home computer for under GB £100. A hundred quid was achievable. A thousand would have gone on a newer car or a family holiday.

In 1982, my parents could just about afford a 2nd hand 48k Spectrum. I think they paid £80 for it, postage included. I was so excited to receive it, I couldn't wait to try it out. Rearranging a corner with a desk and a portable TV would take too long. So I lay on the lounge floor, Spectrum plugged into the family colour TV and sitting on the carpet. Said carpet, of course, blocked the vents on the bottom of the Speccy so it overheated and died in half an hour.

Happily, it was under guarantee. I sent it back, the original owner got a warranty repair, returned it to me, and I took much better care of it after that.

I learned BASIC and programming on it. My favourite program was Beta BASIC, which improved the language and the editor. I wrote programs under Beta BASIC, being careful to use a subset of BASIC that I could compile with HiSoft BASIC for the best of the Speccy's meagre performance.

I bought an LMT 68FX2 keyboard for it and put it in that.

Then an Interface 1 and a microdrive. A terrible storage system. I told myself I was used to Sinclair cost-cutting and it would be OK. It wasn't. It was slow and unreliable and the sub-100 kB capacity was crap. I bought special formatting tools to get more capacity, and the reliability got even worse. My watchword became "save 2 copies of everything!" It still stands me in good stead today, when Microsoft Word crashes occasionally corrupt a document or I absent-mindedly save over something important.

So I replaced the 48k Spectrum with a discount ex-demo 128, bought from Curry's. I could save work-in-progress programs to the RAMdisk, then onto Microdrive when they sort of worked. Annoyingly it wouldn't fit into the keyboard. I put the 48's PCB back into its old case and sold it, and mothballed the keyboard. To my surprise and joy, I found it in 2014 when packing up my house to move abroad. It now has a Raspberry Pi 2 in it, and any day now I will fit my new RasPi 3 into it for extra WLAN goodness.

At Uni, I bought an MGT +D and a 5¼" floppy, plus a cheap Panasonic 9-pin dot-matrix printer. The luxury of fast, reliable storage -- wow! 780kB per disk! Yes, the cool kids had the fancy new 3½" drives, but they cost more and the media were 10x more expensive and I was a poor student.

The +D was horrendously unreliable, and MGT were real stars. They invited me to their Cambridge office where Alan Miles plied me with coffee, showed me around and chatted while Bruce Gordon fixed my interface. The designer himself! How's that for customer service?

I am not sure now, 30 years later, but I think they gave me a very cheap deal on a DISCiPLE because they couldn't get the +D to run reliably. Total stars. I later bought a SAM Coupé out of loyalty, but lovely machine as it was, my Acorn Archimedes A310 was my real love by then. There was really no comparison between even one of the best-designed 8-bit home computers ever and a 32-bit RISC workstation.

I wrote my uni essays on that Spectrum 128; I was the only person in my year at Uni to have their own computer!

Years later, I bought a second 128 from an ad in Micro Mart, just to get the super-rare numeric keypad, Spanish keycaps and all. I sold the computer and kept the keypad.

So I was a Sinclair fan because their low cost meant I could slowly, piecemeal, acquire and expand the machines and peripherals. I never had the money for an up-front purchase of a machine with a decent keyboard and a good BASIC and a disk interface, such as a BBC Micro, much as I would have liked one. I was never interested in the C64 because its BASIC was so poor.

The modern fascination with them mystifies me a bit. I loved mine because it was cheap enough to be accessible to me; those of its limitations that I couldn't fix, such as the poor graphics, or the lack of integer variables and consequent poor BASIC performance, really annoyed me. The crappy keyboard, I replaced. Then the crappy (but, yes, impressively cheap) mass storage: replaced. The BASIC, kinda sorta replaced. Subsequent additions: proper mass storage, better DOS, proper dot-matrix printer on proper Centronics interface.

Later, I even replaced the DOS in my DISCiPLE with Uni-DOS, replacing sequential file handling (which I never used) with subdirectories (which were massively handy. I mean, nearly a megabyte of storage!)

I was never much of a gamer. I'm still not. At school, I collected and swapped games like some kids collect stamps -- the objective was to own a good collection, not to play them. I usually tried each game a couple of times, but no more. Few kept my attention: The Hobbit, the Stranglers' Aural Quest, The Valley, Jet-Pac. Some of the big hits that everyone else loved, like Manic Miner and Jet Set Willy, I hated. Irritating music, very hard gameplay, and so repetitive.

And yet now, people are so nostalgic for the terrible keyboard, they crowd-funded a new version! One of the first things I replaced, it was so bad! There are new models, new hardware, all to play the, to be honest, really quite bad games. Poor graphics, lousy sound on the 48. And yet everyone rhapsodises about them.

I agree that, back then, game design was more innovative and gameplay often more varied and interesting than it is today. Now, the graphics look amazing but there seem to me to be about half a dozen basic styles of gameplay, just with different plots, visuals and soundtracks. Where is the innovation of the level of The Sentinel or Elite or Marble Madness or Q-Bert?

I have a few times played Jet-Pac in an emulator, but I am not a retro-gamer. I enjoy playing with my Toastrack, immaculately restored by Mutant Caterpillar, and my revivified LMT 68FX2, given a brain-transplant by Tynemouth Software. The things I loved about my Sinclairs seem to be forgotten now, and modern Spectrum aficionadi are nostalgic about the very things I resented -- the poor graphics and bargain-basement sound -- or replaced: the rotten keyboard. It is so weird that I can't relate to it, but hey, I'm happy that the machines still exist and that there's an active user community.

March 21, 2016 01:57 PM

March 13, 2016

Alex Bloor

Part 3: Postmortem of a Kickstarter campaign; Camsformer

This is part three! In Part 1, we looked at the campaign, its press coverage, what the project promised, and why it interested me..  In part 2 we moved onto the first seeds of doubt… Then a silence descended on … Continue reading

by Alex Bloor at March 13, 2016 07:01 PM

March 12, 2016

Alex Bloor

Part 2: Postmortem of a Kickstarter campaign; Camsformer

This is part two! In Part 1, we looked at the campaign, its press coverage, what the project promised, and why it interested me.. Now we move onto the first seeds of doubt… HOT PRODUCT OR HOT AIR? The first … Continue reading

by Alex Bloor at March 12, 2016 10:49 AM

March 08, 2016

Alex Bloor

Part 1: Postmortem of a Kickstarter campaign; CamsFormer

Kickstarter has had some wonderful successes and some terrible failures. Perhaps the most famous example of the latter is that of “Zano“, the drone that raised millions and only delivered a handful of units, which didn’t work well. On the … Continue reading

by Alex Bloor at March 08, 2016 06:40 PM

February 29, 2016

Liam Proven

Floppies and hard disks and ROMs, oh my! Or why early micros couldn't boot from HD

In lieu of real content, a repurposed FB comment, 'cos I thought it stood alone fairly well. I'm meant to be writing about containers and the FB comment was a displacement activity.

The first single-user computers started to appear in the mid-1970s, such as the MITS Altair. These had no storage at all in their most minimal form -- you entered code into their few hundreds of bytes of memory (not MB, not kB, just 128 bytes or so.)

One of the things that was radical was that they had a microprocessor: the CPU was a single chip. Before that, processors were constructed from lots of components, e.g. the KENBAK-1.

A single-user desktop computer with a microprocessor was called a microcomputer.

So, in the mid- to late-1970s, hard disks were *extremely* expensive -- thousands of $/£, more than the computer itself. So nobody fitted them to microcomputers.

Even floppy drives were quite expensive. They'd double the price of the computer. So the first mass-produced "micros" saved to audio tape cassette. No disk drive, no disk controller -- it was left out to save costs.

If the machine was modular enough, you could add a floppy disk controller later, and plug a floppy drive into that.

With only tape to load and save from, working at 1200 bits per second or so, even small programs of a few kB took minutes to load. So the core software was built into a permanent memory chip in the computer, called a ROM. The computer didn't boot: you turned it on, and it started running the code in the ROM. No loading stage necessary, but you couldn't update or change it without swapping chips. Still, it was really tiny, so bugs were not a huge problem.
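Those load times are easy to sanity-check. A rough back-of-the-envelope sketch (the program sizes and the flat 1200 bits/s rate are illustrative; real tape formats added leader tones, framing and checksums, so actual loads were slower still):

```python
# Minimum time to stream a program from cassette at ~1200 bits/s.
# Illustrative only: real tape encodings carried extra overhead.

def load_time_seconds(size_bytes: int, bits_per_second: int = 1200) -> float:
    """Return the time to stream size_bytes of raw data at the given bit rate."""
    return size_bytes * 8 / bits_per_second

for kb in (2, 16, 48):
    t = load_time_seconds(kb * 1024)
    print(f"{kb:2d} kB -> {t / 60:.1f} minutes")
# prints: 2 kB -> 0.2 minutes, 16 kB -> 1.8 minutes, 48 kB -> 5.5 minutes
```

Even a modest 16 kB program needs nearly two minutes just for the raw bits, and a full 48 kB memory image well over five -- which is exactly why ROM-resident core software was so attractive.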

Later, by a few years into the 1980s, floppy drives fell in price so that high-end micros had them as a common accessory, although still not built in as standard for most.

But the core software was still on a ROM chip. They might have a facility to automatically run a program on a floppy, but you had to invoke a command to trigger it -- the computer couldn't tell when you inserted a diskette.

By the 16-bit era, the mid-1980s, 3.5" drives were cheap enough to bundle as standard. Now, the built-in software in the ROM just had to be complex enough to start the floppy drive and load the OS from there. Some machines still kept the whole OS in ROM though, such as the Atari ST and Acorn Archimedes. Others, like the Commodore Amiga, IBM PC & Apple Macintosh, loaded it from diskette.

Putting it on diskette was cheaper, and it meant you could update it easily, or even replace it with alternative OSes -- or for games, do without an OS altogether and boot directly into the game.

But hard disks were still seriously expensive, and needed a separate hard disk controller to be fitted to the machine. Inexpensive home machines like the early or basic-model Amigas and STs didn't have one -- again, it was left out for cost-saving reasons.

On bigger machines with expansion slots, you could add a hard disk controller and it would have a ROM chip on it that added the ability to boot from a hard disk connected to the controller card. But if your machine was a closed box with no internal slots, it was often impossible to add such a controller at all; and even where a machine did get a hard disk controller and drive later in its life, if its ROMs couldn't be updated it still wasn't possible to boot from the hard disk.

But this was quite rare. The 2nd ever model of Mac, the Mac Plus, added SCSI ports, the PC was always modular, and the higher-end models of STs, Amigas and Archimedes had hard disk interfaces.

The phase of machines with HDs but booting from floppy was fairly brief and they weren't common.

If the on-board ROMs could be updated, replaced, or just supplemented with extra ones in the HD controller, you could add the ability to boot from HD. If the machine booted from floppy anyway, this wasn't so hard.

Which reminds me -- I am still looking for an add-on hard disk for an Amstrad PCW, if anyone knows of such a thing!

February 29, 2016 10:36 PM

February 18, 2016

Liam Proven

Unix: the new legacy platform [tech blog post, by me]

Today, Linux is Unix. And Linux is a traditional, old-fashioned, native-binary, honking great monolithic lump of code in a primitive, unsafe, 1970s language.

The sad truth is this:

Unix is not going to evolve any more. It hasn't evolved much in 30 years. It's just being refined: the bugs are gradually getting caught, but no big changes have happened since the 1980s.

Dr Andy Tanenbaum was right in 1991. Linux is obsolete.

Many old projects had a version numbering scheme like, e.g., SunOS:

Release 1.0, r2, r3, r4...

Then a big rewrite: Version 2! Solaris! (AKA SunOS 5)

Then Solaris 2, 3, 4, 5... now we're on 11 and counting.

Windows reset after v3, with NT. Java did the reverse after 1.4: Java 1.5 was "Java 5". Looks more mature, right? Right?

Well, Unix dates from between 1970 and the rewrite in C in 1972. Motto: "Everything's a file."

Unix 2.0 happened way back in the 1980s and was released in 1991: Plan 9 from Bell Labs.

It was Unix, but with even more things turned into files. Integrated networking, distributed processes and more.

The world ignored it.

Plan 9 2.0 was Inferno: it went truly platform-neutral. C was replaced by Limbo, a type-safe language compiled down to binaries that ran on Dis, a universal VM. Sort of like Java, but better, and reaching right down into the kernel.

The world ignored that, too.

Then came the idea of microkernels. They've been tried lots of times, but people seized on early versions that had problems -- Mach 1 and Mach 2 -- and on failed projects such as the GNU HURD.

They ignore successful versions:
* Mach 3 as used in Mac OS X and iOS
* DEC OSF/1, later called DEC Tru64 Unix, also based on Mach
* QNX, a proprietary true-microkernel OS used widely around the world since the 1980s, now in Blackberry 10 but also in hundreds of millions of embedded devices.

All are proper solid commercial successes.

Now, there's Minix 3, a FOSS microkernel with the NetBSD userland on top.

But Linux is too established.

Yes, NextBSD is a very interesting project. But basically, it's just fitting Apple userland services onto FreeBSD.

So, yes, interesting, but FreeBSD is a sideline. Linux is the real focus of attention. FreeBSD jails are over a decade old, but look at the fuss the world is making about Docker.

There is now too much legacy around Unix -- and especially Linux -- for any other Unix to get much traction.

We've had Unix 2.0, then Unix 2.1, then a different, less radical, more conservative kind of Unix 2.0 in the form of microkernels. Simpler, cleaner, more modular, more reliable.

And everyone ignored it.

So we're stuck with the old one, and it won't go away until something totally different comes along to replace it altogether.

February 18, 2016 12:59 PM

February 14, 2016

Denesh Bhabuta

Unending Love

Snow covered Helsinki, FI               Sunday, 14 February 2016               2.10am EET

I seem to have loved you in numberless forms, numberless times…
In life after life, in age after age, forever.
My spellbound heart has made and remade the necklace of songs,
That you take as a gift, wear round your neck in your many forms,
In life after life, in age after age, forever.

Whenever I hear old chronicles of love, its age-old pain,
Its ancient tale of being apart or together.
As I stare on and on into the past, in the end you emerge,
Clad in the light of a pole-star piercing the darkness of time:
You become an image of what is remembered forever.

You and I have floated here on the stream that brings from the fount.
At the heart of time, love of one for another.
We have played along side millions of lovers, shared in the same
Shy sweetness of meeting, the same distressful tears of farewell-
Old love but in shapes that renew and renew forever.

Today it is heaped at your feet, it has found its end in you
The love of all man’s days both past and forever:
Universal joy, universal sorrow, universal life.
The memories of all loves merging with this one love of ours –
And the songs of every poet past and forever.

– With thanks to Rabindranath Tagore

by Admin at February 14, 2016 12:19 AM

February 08, 2016

Steve Kennedy

Speed-up your headless Mac Mini

The Mac Mini is Apple's smallest Mac and though it can be used as a workstation, it's often used as a server for offices/workgroups and even in datacentres. Apple even supplies software to make it function as a server (unsurprisingly called OS X Server; currently v5.0.15 is the release version, with v5.1 beta 2 as the beta).

The server software supports various functions including a mail server and even remote Xcode compilations. However sometimes it's useful to access the Mac Mini remotely using Apple's remote desktop, getting a virtual screen onto the unit itself. Unfortunately if it's in headless mode, the on-board GPU is not enabled and all graphics are handled by the main CPU, which can make the system seem extremely slow as the CPU is spending its time rendering the screen, animations, screen refreshes etc.

Now there is a solution to this: Newertechnology have produced an HDMI Headless Video Accelerator (it's about the same size as a small Bluetooth or WiFi adapter) that plugs into the HDMI port. The Mac Mini then thinks a screen is attached, the GPU is enabled, all screen handling is done by the GPU rather than the host CPU, and everything runs smoothly again.

The adapter supports a maximum resolution of 1080p (and up to 3840 x 2160 on a late 2014 model). Other models supported are Mid 2010 through to the latest. OS X 10.6.8 is the earliest version of the operating system supported (no drivers are required).

It can be found on-line for around £21.99. A really useful little addition if using a Mac Mini in headless mode and accessing it remotely (it's also useful for remote animation and anything else that uses the GPU).

by Steve Karmeinsky at February 08, 2016 04:39 PM

February 05, 2016

Liam Proven

Why do Macs have "logic boards" while PCs have "motherboards"?

Since it looks like my FB comment is about to get censored, I thought I'd repost it...


Gods, you are such a bunch of newbies! Only one comment out of 20 knows the actual answer.

History lesson. Sit down and shaddup, ya dumb punks.

Early microcomputers did not have a single PCB with all the components on it. They were on separate cards, and all connected together via a bus. This was called a backplane and there were 2 types: active and passive. It didn't do anything except interconnect other components.

Then, with increasing integration, a main board with the main controller logic on it became common, but this had slots on it for other components that were too expensive to include. The pioneer was the Apple II, known affectionately as the Apple ][. The main board had the processor, RAM and glue logic. Cards provided facilities such as printer ports, an 80 column display, a disk controller and so on.

But unlike the older S100 bus and similar machines, these boards did nothing without the main board. So they were called daughter boards, and the one they plugged into was the motherboard.

Then came the Mac. This had no slots so there could be no daughterboards. Nothing plugged into it, not even RAM -- it accepted no expansions at all; therefore it made no sense to call it a motherboard.

It was not the only PCB in the computer, though. The original Mac, remember, had a 9" mono CRT built in. An analogue display, it needed analogue electronics to control it. These were on the Analog Board (because Americans can't spell.)

The board with the digital electronics on it -- the bits that did the computing, in other words the logic -- was the Logic Board.

2 main boards, not one. But neither was primary, neither had other subboards. So, logic board and analog board.

And it's stuck. There are no expansion slots on any modern Mac. They're all logic boards, *not* motherboards because they have no children.

February 05, 2016 05:51 PM

Andy Smith

Your Debian netboot suddenly can’t do Ext4?

If, like me, you’ve just done a Debian netboot install over PXE and discovered that the partitioner suddenly seems to have no option for the Ext4 filesystem (leaving only btrfs and XFS), despite the fact that it worked fine a couple of weeks ago, do not be alarmed. You aren’t losing your mind. It seems to be a bug.

As the comment says, downloading netboot.tar.gz version 20150422+deb8u3 fixes it. You can find your version in the debian-installer/amd64/boot-screens/f1.txt file. I was previously using 20150422+deb8u1 and the commenter was using 20150422+deb8u2.

Looking at the dates on the files I’m guessing this broke on 23rd January 2016. There was a Debian point release around then, so possibly you are supposed to download a new netboot.tar.gz with each one – not sure. Although if this is the case it would still be nice to know you’re doing something wrong as opposed to having the installer appear to proceed normally except for denying the existence of any filesystems except XFS and btrfs.

Oh and don’t forget to restart your TFTP daemon. tftpd-hpa at least seems to cache things (or maybe hold the tftp directory open, as I had just moved the old directory out of the way), so I was left even more confused when it still seemed to be serving 20150422+deb8u1.

by Andy at February 05, 2016 09:50 AM

February 04, 2016

Alex Bligh

Nominet – Sir Michael Lyons’ Review

A year or so ago, Nominet’s board commissioned Sir Michael Lyons to perform an independent external review of Nominet’s operating model and governance arrangements. Sir Michael reported to the board in October, and the board have now released both Sir Michael’s report, and their response.

I spoke to Sir Michael briefly at the last AGM, and at his request had a longer telephone conversation. I have to say I was impressed – he seemed to have got to the heart of the issues pretty quickly. And he appears to have produced a very sensible report.

Sir Michael’s recommendations are to be found at the end of his report. In summary they are level-headed and reasonable. None are particularly radical, and I think he is probably correct that radical surgery is not needed. Nominet appear to have accepted most of them, although I do find it a little strange that they don’t accept the need for a finance director on the board of a company of Nominet’s size.

Given I agree with almost everything Sir Michael has written, I’m not going to pick the report apart in full here. But I will mention two details.


Firstly, Sir Michael suggests (page 19):

Introduce clear KPIs for cost control and return on Research & Development

I think introducing KPIs is a valuable strategy, particularly regarding cost control. However, I question how useful KPIs for ‘return on Research and Development’ would be. Long term R&D produces returns only over the long term; by then it’s too late to control the cost of that R&D. I think keeping a close eye on what is researched is probably even more important than what it costs. Moreover, in any R&D environment it should be an expected result that whilst some projects will produce transformative commercial successes, some (perhaps most) projects do not come to commercial fruition; accepting that this is an inevitability and not a failure is vital, not least because otherwise staff have a perverse incentive to carry on with such projects.

In his recommendations section, this has been tempered to:

Recommendation 18: Nominet should make public the KPIs by which it holds the executive to account reflecting at the minimum registry costs and progress with diversification

which I believe is a better view of things.

The Nominet Trust and Nominet’s public purpose

Sir Michael makes a number of wise remarks about the Nominet Trust and Nominet’s public purpose. Here are a couple of quotes:

From page 9:

It is not enough to argue that Nominet fulfils its wider public purposes by making a profit or that the bigger that profit, the bigger the social benefit. Nor, for that matter, that it meets its social responsibilities by donating some, or all, of its profit to charitable purposes.

From page 12:

However, there is one point that I would like to underline and that is the importance of taking a wide view of social benefit and so avoid focusing solely on welfare benefits. There may be a danger that this has marked the early days of the Nominet Trust, where the board appears to have put an emphasis on separation and independence for the new Trust (both important issues for charitable status to be secured) but, perhaps, inadequate consideration of purpose.
Much of what the Trust has undertaken appears to be valued by the beneficiaries and other commentators but does not appear to be widely understood, or valued, by the membership. In part, this may be remedied by clearer communications in the future, and that is certainly on the agenda, but I believe it may also offer some lessons for the definition of the company’s wider purposes. Lessons, in terms of both the importance of clearly-defined purposes but also of ensuring that they are based on a wide view of social benefit. Most crucially, the interest of the original founders in establishing Nominet as a company capable of contributing to the further development of the internet was, itself, a clear purpose of social benefit. Whilst I believe that objective now needs to be revisited and, perhaps, broken down with a set of purposes reflecting the company’s current understanding of the internet and the wider digital economy, I strongly encourage the board to give weight to objectives which offer economic as well as social benefits. Not least, because these are likely to be more appealing to the membership.

I think Sir Michael has these points exactly correct, though as they did not find their way into a recommendation, the board did not respond to them. Donating money to the Nominet Trust is laudable, but does not mean that Nominet has thereby automatically achieved its public purpose; public purpose should run through its operations. Similarly, Nominet sometimes appears to want to wash its hands of the money once donated (perhaps in order to ensure the Nominet Trust appears to be independent); whilst I agree that Nominet should not involve itself in day to day decisions of the Nominet Trust, it should ensure that the Nominet Trust is applying its funds in a manner consistent with Nominet’s own public purpose. Funding more (charitable) projects directly related to internet infrastructure, for instance, would not go amiss.

by Alex Bligh at February 04, 2016 05:57 PM

February 01, 2016

Aled Treharne (ThinkSIP)

Hurting your partners the Basecamp way

I was recently pointed towards this thread on github which I read with growing horror as the thread developed.

Now, fair disclosure – I’ve never really liked Basecamp. In the words of an old friend, they appear to have confused “simple” with “simplistic” and released a product that left me in the position where I always wished it did more than it did. As a result, I’ve only ever used it when customers require it for their projects.

However, the discussion on that thread isn’t related to the product per se, but rather the release of the new version of Basecamp. As developers occasionally need to do from time to time, they’ve rewritten the product. This does occasionally happen when you reach a point where design decisions were made on assumptions that are no longer true – often due to growth. Fact of life. No problems here.

The problem comes in how Basecamp have approached their partners – Basecamp is at its heart an end-user-focused application and some of the principles outlined in the book that they released back in 2006 hold well with that ethos. The problem they have is that along the way, they have taken a product decision to allow integration with third party apps via their API – so when they released v3 of Basecamp and focused around their users, they left their partners in the dark.

My biggest problem with this whole situation is that Basecamp could easily have avoided this if they’d clearly communicated with their partners – a group of people who Basecamp have to engage with in a singularly different way to their end users. Had they said, back in November, “hey, look, v3 is coming out but because it’s a rewrite the API will be different and not backwards compatible, so you need to tell your users that” then I’m sure the partners would have been unhappy but could do something about it. Even identifying which version a user was using based on the old API would have been useful so that at least the third party app could pop up an alert.

Instead Basecamp have strung their partners along for the ride for several months, all the time promising an API “real soon, now!”. As a result, end users who felt, like me, that the product needed extras and used third parties who used the API to implement those extras are now stuck between a rock and a hard place. Companies who wrote integrations are facing a real problem, especially if they’re small shops whose business models rely on this integration.

Maybe I spend too much of my time in a world where APIs, resilience and reliability are “table stakes”. Maybe I’ve been spoiled by companies who place the importance of integration front and centre of their product strategy. Maybe I’m too used to companies who understand how their users use and perceive their products and are willing to communicate clearly to those “users”, whether they’re end-users or third party integrators.

Maybe, once again, I’m just expecting too much from Basecamp.

by Aled Treharne at February 01, 2016 02:08 PM

January 30, 2016

Liam Proven

Fallen giants - comparing the '80s second-generation home computers

A friend of mine who is a Commodore enthusiast commented that if the company had handled it better, the Amiga would have killed the Apple Mac off.

But I wonder. I mean, the $10K Lisa ('83) and the $2.5K Mac ('84) may only have been a year or two before the $1.3K Amiga 1000 ('85), but in those years, chip prices were plummeting -- maybe rapidly enough to account for the discrepancy.

The 256kB Amiga 1000 was half the price of the original 128kB Mac a year earlier.

Could Tramiel's Commodore have sold Macs at a profit for much less? I'm not sure. Later, yes, but then, Mac prices fell, and anyway, Apple has long been a premium-products-only sort of company. But the R&D process behind the Lisa & the Mac was long, complex & expensive. (Yes, true, it was behind the Amiga chipset, too, but less so on the OS -- the original CAOS got axed, remember. The TRIPOS thing was a last-minute stand-in, as was Arthur/RISC OS on the Acorn Archimedes.)

The existence of the Amiga also pushed development of the Mac II, the first colour model. (Although I think it probably more directly prompted the Apple ][GS.)

It's much easier to copy something that someone else has already done. Without the precedent of the Lisa, the Mac would have been a much more limited 8-bit machine with a 6809. Without the precedent of the Mac, the Amiga would have been a games console.

I think the contrast between the Atari ST and the Sinclair QL, in terms of business decisions, product focus and so on, is more instructive.

The QL could have been one of the important 2nd-generation home computers. It was launched a couple of weeks before the Mac. But Sinclair went too far with its hallmark cost-cutting on the project, and the launch date was too ambitious. The result was a 16-bit machine that was barely more capable than an 8-bit one from the previous generation. Most of the later 8-bit machines had better graphics and sound; some (Memotech, Elan Enterprise) as much RAM, and some (e.g. the SAM Coupé) also supported built-in mass storage.

But Sinclair's OS, QDOS, was impressive. An excellent BASIC, front & centre like an 8-bit machine, but also full multitasking, and modularity so it readily handled new peripherals -- but no GUI by default.

The Mac, similarly RAM-deprived and with even poorer graphics, blew it away. Also, with the Lisa and the Mac, Apple had spotted that the future lay in GUIs, which Sinclair had missed -- the QL didn't get its "pointer environment" until later, and when it did, it was primitive-looking. Even the modern version still is.

Atari, entering the game a year or so later, had a much better idea where to spend the money. The ST was an excellent demonstration of cost-cutting. Unlike the bespoke custom chipsets of the Mac and the Amiga, or Sinclair's manic focus on cheapness, Atari took off-the-shelf hardware and off-the-shelf software and assembled something that was good enough. A decent GUI, an OS that worked well in 512kB, graphics and sound that were good enough. Marginally faster CPU than an Amiga, and a floppy format interchangeable with PCs.
Yes, the Amiga was a better machine in almost every way, but the ST was good enough, and at first, significantly cheaper. Commodore had to cost-trim the Amiga to match, and the first result, the Amiga 500, was a good games machine but too compromised for much else.

The QL was built down to a price, and suffered for it. Later replacement motherboards and third-party clones such as the Thor fixed much of this, but it was no match for the GUI-based machines.

The Mac was in some ways a sort of cut-down Lisa, trying to get that ten-thousand-dollar machine down to a more affordable quarter of the price. Sadly, this meant losing the hard disk and the innovative multitasking OS, which were added back later in compromised form -- the latter cursed the classic MacOS until it was replaced with Mac OS X at the turn of the century.

The Amiga was a no-compromise games machine, later cleverly shoehorned into the role of a very capable multimedia GUI computer.

The ST was also built down to a price, but learned from the lessons of the Mac. Its spec wasn't as good as the Amiga, its OS wasn't as elegant as the Mac, but it was good enough.

The result was that games developers aimed at both, limiting the quality of Amiga games to the capabilities of the ST. The Amiga wasn't differentiated enough -- yes, Commodore did high-end three-box versions, but the basic machines remained too low-spec. The third-generation Amiga 1200 had a faster 68020 chip which the OS didn't really utilise, and although it had provision for a built-in hard disk, that disk was an optional extra. AmigaOS was a pain to use with only floppies, like the Mac -- whereas the ST's ROM-based OS was fairly usable with a single drive. A dual-floppy-drive Amiga was the minimum usable spec, really, and it benefited hugely from a hard disk -- but Commodore didn't fit one.

The ST killed the Amiga, in effect. By providing an experience that was nearly as good in the important, visible ways, the ST forced Commodore to price-cut the Amiga to keep it competitive, hobbling the lower-end models. And as games were written to be portable between them both without too much work, they mostly didn't exploit the Amiga's superior abilities.

Acorn went its own way with the Archimedes -- it shared almost no apps or games with the mainstream machines, and while its OS is still around, it hasn't kept up with the times and is mainly a curiosity. Acorn kept its machines a bit higher-end, having affordable three-box models with hard disks right from the start, and focused on the educational niche where it was strong.

But Acorn's decision to go its own way was entirely vindicated -- its ARM chip is now the world's best-selling CPU. Both Microsoft and Apple OSes run on ARMs now. In a way, it won.

The poor Sinclair QL, of course, failed in the market and Amstrad killed it off when it was still young. But even so, it inspired a whole line of successors -- the CST Thor, the ICL One-Per-Desk (AKA Merlin Tonto, AKA Telecom Australia ComputerPhone), the Qubbesoft Aurora replacement main board and later the Q40 and Q60 QL-compatible PC-style motherboards. It had the first ever multitasking OS for a home computer, QDOS, which evolved into SMSQ/e and moved over to the ST platform instead. It's now open source, too.

And Linus Torvalds owned a QL, giving him a taste for multitasking so that he wrote his own multitasking OS when he got a PC. That, of course, was Linux.

The Amiga OS is still limping along, now running on a CPU line -- PowerPC -- that is also all but dead. The open-source version, AROS, is working on an ARM port, which might make it slightly more relevant, but it's hard to see a future or purpose for the two PowerPC versions, MorphOS and AmigaOS 4.

The ST OS also evolved, into a rich multitasking app environment for PCs and Macs (MagiC) and into a rich multitasking FOSS version, AFROS, running on an emulator on the PC, Aranym. A great and very clever little project, but one which went nowhere, as did PC GEM, sadly.

All of these clever OSes -- AROS, AFROS, QDOS AKA SMSQ/E. All went FOSS too late and are forgotten. Me, I'd love Raspberry Pi versions of any and all of them to play with!

In its final death throes, a flailing Atari even embraced the Transputer. The Atari ABAQ could run Perihelion's HELIOS, another interesting long-dead OS. Acorn's machines ran one of the most amazing OSes I've ever seen, TAOS, which nearly became the next-generation Amiga OS. That could have shaken up the industry -- it was truly radical.

And in a funny little side-note, the next next-gen Amiga OS after TAOS was to be QNX. It didn't happen, but QNX added a GUI and rich multimedia support to its embedded microkernel OS for the deal. That OS is now what powers my Blackberry Passport smartphone. Blackberry 10 is now all but dead -- Blackberry has conceded the inevitable and gone Android -- but BB10 is a beautiful piece of work, way better than its rivals.

But all the successful machines that sold well? The ST and Amiga lines are effectively dead. The Motorola 68K processor line they used is all but dead, too. So is its successor, PowerPC.

So it's the two niche machines that left the real legacy. In a way, Sinclair Research did have the right idea after all -- but prematurely. It thought that the justification for 16-bit home/business computers was multitasking. In the end, it was, but only in the later 32-bit era: the defining characteristic of the 16-bit era was bringing the GUI to the masses. True robust multitasking for all followed later. Sinclair picked the wrong feature to emphasise -- even though the QL post-dated the Apple Lisa, so the writing was there on the wall for all to see.

But in the end, the QL inspired Linux and the Archimedes gave us the ARM chip, the most successful RISC chip ever and the one that could still conceivably drive the last great CISC architecture, x86, into extinction.

Funny how things turn out.

January 30, 2016 06:37 PM

January 25, 2016

Steve Kennedy

Techstars London opens applications for next cohort

Techstars has opened applications for its 5th London program, which will run from June 20th, with the demo day taking place in September.

They will be accepting 10 to 12 teams and interested companies should apply on-line through F6s.

Techstars has some great mentors (there may be some bias here) and some great companies have come out of the program. It's progressed a lot since Springboard days.

by Steve Karmeinsky at January 25, 2016 09:31 PM

The Gadget Show Live show returns to the NEC

The Gadget Show Live once again returns to the NEC in Birmingham, from 31st March to 3rd April 2016.

Channel 5 Gadget Show presenters Jason Bradbury, Jon Bentley, Ortis Deley and Amy Williams will be on the stage, and this year a TV episode will be filmed, giving members of the public a chance to appear on the show on TV.

There will be 5 areas (including the main stage): -

  • Better Life - Products that can help people or are beautiful in the home
  • Power Up - technology to power their lives which is anything from wearables, fitness devices and in-car kit
  • The Lab - Future/inspirational tech
  • The Arcade - which is all about gaming

Tickets are available on-line and cost: -

  • Child (Thurs) £9.99
  • Adult (Thurs) £16.99
  • Child (Fri, Sat, Sun) £11.99
  • Adult (Fri, Sat, Sun) £18.99

by Steve Karmeinsky at January 25, 2016 08:41 PM

January 23, 2016

Alex Smith

Preventing hotlinking using CloudFront WAF and Referer Checking

When you run sites with shareable content, hotlinking becomes a common problem. There are several ways to address this – such as validating the Referer header in your webserver and either issuing a redirect or returning a 403 Forbidden.

If you’re also using a CDN, this becomes less practical, as the CDN stores a copy of your content near the edge. Even if your webserver validates the original request’s headers, further requests for that content need to be validated by the CDN itself.

CloudFront now supports this using the WAF Feature – and its ability to match and filter based on Headers; this post will cover how to prevent hotlinking using this feature.



Firstly, to cover off various terms. WAF configurations consist of an ACL, which is associated to a given CloudFront distribution. Each ACL is a collection of one or more rules, and each rule can have one or more match conditions. Match conditions are made up of one or more filters, which inspect the request (e.g. Headers, URI) to match for certain conditions.

WAF Setup

The setup is actually pretty straightforward. In this case for simplicity we’re assuming that the images/static files/etc are separated on to another subdomain, so we only need to validate the header. We create a WAF ruleset containing a single rule, with a single match condition, made up of a single filter. That match condition looks at the ‘Referer’ (sic) header and verifies it ends with one or more values. If the rule is matched, the traffic is allowed. Otherwise, the traffic is blocked by the default rule on the WAF. Below, I’ve covered off how to do this via the console for ease, but at some point I’ll update this to include using the CLI.
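The ‘Ends with’ logic the rule applies can be sketched in a few lines of Python (a hypothetical helper for illustration only, not AWS code; the suffixes are placeholder domains, since the real values were elided above):

```python
# Sketch of the WAF rule's match logic: allow a request only when its
# Referer header ends with one of the permitted suffixes; anything
# else falls through to the default 'Block' action.
# ALLOWED_SUFFIXES is a placeholder list, not a real configuration.
ALLOWED_SUFFIXES = ("example.com", ".example.com")

def referer_allowed(headers: dict, allowed=ALLOWED_SUFFIXES) -> bool:
    """Mirror the 'Ends with' string match condition on the Referer header."""
    referer = headers.get("Referer", "")
    return any(referer.endswith(suffix) for suffix in allowed)
```

Note that a full Referer value includes a path (e.g. `https://example.com/page`), so the suffix you actually match on has to account for that; the exact filter value used in this setup was elided above.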

Step 0: Determine what needs to be protected. In this case, we’re going to block hotlinking for any files under that doesn’t have a Referer ending with

Step 1: Create a new Web ACL
Create ACL

Step 2: Create a new String Match Condition with a filter matching on Referer. This will match for anything ending ‘’, to allow us to hotlink from other sites under our own domain. If you need to be more secure (e.g. prevent someone registering and using that to hotlink your content), you can have additional match conditions for only valid Referers using ‘Exactly matches’.

Step 3: Create a new rule, and add the specified String Match Condition. Once created, set this new rule to ‘Allow’, and the Default Action to ‘Block’. If you want to test this only, you can set the rule action to ‘Count’ and the default rule to ‘Allow’.


Step 4: Associate this to the relevant CloudFront distribution, and test.

The Result

Now when we request files without the relevant header, they’re blocked at the CDN, and valid requests are still allowed through.

» curl -I
HTTP/1.1 403 Forbidden

» curl -H "Referer:" -I
HTTP/1.1 200 OK

Next Steps

As mentioned, this only works for cases where content is under a separate (sub-)domain. This is due to the AWS WAF not currently being able to match negatives, however this still allows you to protect your content without the need for application modification. In cases where you can modify the application easily, there are other ways to protect your content, such as Signed URLs.
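The canned-policy mechanism behind Signed URLs can be sketched with the standard library alone (a sketch only: a real signed URL also requires an RSA-SHA1 signature of the policy made with a CloudFront key pair, which is omitted here, and the URL and expiry values are placeholders):

```python
import base64
import json

def canned_policy(url: str, expires_epoch: int) -> str:
    """Build a CloudFront-style canned policy: access to one resource
    until the given Unix time."""
    return json.dumps(
        {"Statement": [{
            "Resource": url,
            "Condition": {"DateLessThan": {"AWS:EpochTime": expires_epoch}},
        }]},
        separators=(",", ":"),
    )

def cloudfront_b64(data: bytes) -> str:
    """CloudFront's URL-safe base64 variant: '+'->'-', '='->'_', '/'->'~'."""
    return base64.b64encode(data).decode().translate(str.maketrans("+=/", "-_~"))

# Placeholder values for illustration; a real signed URL appends
# Expires, the encoded RSA signature of this policy, and the
# Key-Pair-Id as query string parameters.
policy = canned_policy("https://d111111abcdef8.cloudfront.net/img.png", 1700000000)
```

The design advantage over Referer checking is that the CDN validates a cryptographic grant rather than an easily spoofed header, at the cost of the application changes mentioned above.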

by alexjs at January 23, 2016 07:08 AM

January 15, 2016

Steve Kennedy

Ofcom publishes regulations for 'TV whitespace' tech

Ofcom, the Super Regulator, published (in December) the new regulations for TV Whitespace technology, which came into force on the 31st December 2015, allowing equipment that meets the regulations to operate on a license-exempt basis.

In the new digital era of terrestrial TV, there are digital multiplexes across the UK. These multiplexes use different channels, so that neighbouring transmitters don't interfere with each other, which means there is a lot of potentially unused spectrum in any particular area. Multiplexes sit in the UHF band, which covers 470 - 790 MHz.

In order to avoid interference with existing (licensed) spectrum users, devices will need to communicate with databases which apply rules, set by Ofcom, to put limits on the power levels and frequencies at which devices can operate. There is also a 'kill switch' function whereby the database can tell a device to stop operating completely if interference is found to be occurring.

The UHF TV band is currently allocated for use by Digital Terrestrial Television (DTT) broadcasting and Programme Making and Special Events (PMSE). Currently, Freeview TV channels are broadcast using up to ten multiplexes. Each multiplex requires an 8 MHz channel. Multiplexes are transmitted at different frequency channels across the country in the frequency range 470 to 790MHz.

Whilst a total of 32 channels each 8 MHz wide are reserved for DTT in the UK, normally only one channel per multiplex is used at any given location. In other words, the majority of channels are unused for DTT transmission at any given location. This is required because high-power TV broadcasts using the same frequency need geographic separation between their coverage areas to avoid interference.
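The channel arithmetic above can be checked quickly (all figures are taken from the Ofcom description quoted in this post):

```python
# UHF TV band and DTT channel figures from the text above.
band_mhz = 790 - 470        # width of the UHF TV band in MHz
channel_mhz = 8             # one DTT multiplex channel in MHz
total_channels = band_mhz // channel_mhz

dtt_reserved = 32           # channels reserved for DTT in the UK

print(total_channels)                 # 40 channels fit in the 470-790 MHz band
print(total_channels - dtt_reserved)  # 8 channels in the band not reserved for DTT
```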

The channels that are not used by DTT at any given location can be used by lower-power devices on an opportunistic basis. This opportunistic access to interleaved spectrum is not new. Programme making and special events (PMSE) equipment such as radio microphones and audio devices have been exploiting the interleaved spectrum for a number of years, and Ofcom issues more than 50,000 assignments annually for this type of use.

Ofcom refers to the spectrum that is left over by DTT (including local TV) and PMSE use as TV White Spaces (TVWS). By this it means the combination of locations and frequencies in the UHF TV band that can be used by new users, operating in accordance with technical parameters that ensure that there is a low probability of harmful interference to DTT reception, PMSE usage or services above and below the band.

The following organisations have signed contracts and completed qualification to run the white space databases (WSDB): -

The 'master' devices that talk to the databases should report their height; if they don't, the database will use conservative default values when calculating operational parameters, i.e. it will use height values that would result in operational parameters that are equal to or more restrictive than they would be had the device reported its height.

Though the regulations do not specify an update time (for master devices to communicate with the databases), Ofcom has stated a maximum time of 15 minutes, which strikes an appropriate balance between the need to be able to act quickly in the event of interference and limiting the practical burden on databases of maintaining frequent communications with potentially large numbers of devices. This may be revised if found to be unsuitable.

The WSD Regulations apply to the United Kingdom and the Isle of Man. They do not extend to the Channel Islands.

A master device is a device which is capable of communicating with and obtaining operational parameters from a database for the purpose of transmitting within the frequency band 470 MHz to 790 MHz.

A slave device is a device which is capable of transmitting within the frequency band 470 MHz to 790 MHz after receiving slave operational parameters from a master device.

Type A equipment is defined as equipment which has an integral antenna, a dedicated antenna or an external antenna, and is intended for fixed location use only.

Type B equipment is defined as equipment which has a dedicated antenna or an integral antenna, and is not intended for fixed location use.

WSDs must not be used airborne.

WSDs must be configured in such a way that a user is unable to input, reconfigure or alter any technical or operational settings or features of a device in a way which (i) would alter the technical characteristics of the device which are communicated to a database (this includes the master and slave device characteristics), or (ii) would cause the device to operate other than in accordance with master operational parameters or slave operational parameters, as applicable. An example of (ii) would be the antenna gain. If this parameter is set to be smaller than the actual gain of the antenna, then the device could radiate at a higher level than the limit communicated by the WSDB.

A master device:

  • must be able to determine its location
  • must provide device parameters (defined now as its ‘master device characteristics’) to a database, in order to obtain operational parameters from the database. The device parameters include the location and the technical characteristics of the device listed below. The operational parameters indicate to the device the channels and power levels that it can use, together with other constraints.
  • must only transmit in the UHF TV band after requesting and receiving operational parameters from, and in accordance with, operational parameters provided by a database
  • must apply the simultaneous operation power restriction (described at paragraph 3.23 above), if it operates on more than one DTT channel simultaneously and the master operational parameters indicate that this restriction applies
  • must report back to the database the channels and powers that the WSD intends to use – the channel usage parameters – and operate within those channels and powers.
In addition, where its operational parameters stop being valid, a master device must tell slave devices that are connected to it to stop transmitting and must stop transmitting itself. The operational parameters stop being valid if:
  • a database instructs the master device that the parameters are not valid
  • a master device cannot verify, according to the update procedure, that the operational parameters are valid.

In order to support more WSDBs, Ofcom also intends to publish a machine-readable version of that list on a website hosted by Ofcom, so that it can be selected by a WSD through a process known as “database discovery”. Ofcom would expect that list to include those database operators which have informed Ofcom that they are ready to start providing services to white space devices.

It is interesting that Sony is moving into this space, which probably means they will start producing equipment that uses white space technology for short range communication, such as say a PS4 to its peripherals.

by Steve Karmeinsky at January 15, 2016 04:29 PM

Intel Edison, jack of all trades, but maybe master of none

The Intel Edison is a small system-on-chip (SoC) that measures about 35.5 × 25.0 × 3.9 mm (on its carrier PCB) which has a connector on it allowing it to be plugged into other things (it is possible to get the SoC on just the PCB without the edge connector).

The SoC board can then be plugged on to various boards from Intel, one is a breakout board which exposes various pins and has some USB sockets, there's also an Arduino compatible PCB allowing Arduino shields to be used.

The Edison tries to be everything to everyone, but doesn't always succeed. It actually has two processors inside, a dual-thread, dual-core Atom running at 500MHz and a Quark 32-bit micro-controller running at 100MHz. The Atom runs Yocto Linux and the Quark a Real-time Operating System (RTOS).

It has 1GB of RAM and 4GB of Flash, 802.11 a/b/g/n WiFi and Bluetooth 4.0.

There's a total of 40 I/O pins that can be configured to be: -

  • SD card - 1 interface
  • UART - 2 controllers, 1 with full flow control
  • I2C - 2 controllers
  • SPI - 1 controller with 2 chip selects
  • I2S - 1 controller
  • GPIO - 12 with 4 capable of PWM
  • USB 2.0 - 1 OTG controller
  • Clock output - 32 kHz, 19.2 MHz

Intel provide multiple ways of programming the system: -

  • Arduino IDE (v1.6+, no longer requires an Intel specific build)
  • Eclipse supporting: C, C++, and Python
  • Intel XDK supporting: Node.JS and HTML5

There are other environments that also support Edison (in Arduino or direct mode) such as the node.js Johnny-Five system. Node-red can also be installed directly on the Edison and accessed through its web server. Google's Brillo is also an option now.

Running Linux does have benefits if you're into Linux environments as there's lots of packages that can be downloaded for it or indeed built as required.

You'll either love or hate Intel's development environment (XDK).

Integrating Edison into your own projects does give you a lot of flexibility, though the power requirements aren't as low as some other Arduino-type boards (but by the time shields have been added to give the same functionality, power requirements increase with them). In theory it is possible to put the Atom to sleep and have the Quark micro-controller do background, non-CPU-intensive tasks and then wake the Atom up to do some hard processing or data transfers through WiFi, say, but it's not meant to be 'easy' to actually implement.

The basic Edison (just the board) is around £42, on the small breakout board it's about £72 and on the Arduino base it's £96 though on-line pricing varies.

Overall the Edison really does try to be everything to everyone, and it's a pretty powerful computer (well, two), but it may be too generic for lots of things, and the variety of programming modes etc. can be confusing.

by Steve Karmeinsky at January 15, 2016 03:30 PM

January 09, 2016

Liam Proven

Information about the Oberon language & OS, & its descendants AOS & Bluebottle

I recently received an email from a reader -- a rare event in itself -- following my recent Reg article about educational OSes:

They asked for more info about the OS. So, since there's not a lot of this about, here is some more info about the Oberon programming language, the Oberon operating system written in it, and the modern version, AOS.

The homepage for the FPGA OberonStation went down for a while. Perhaps it was the interest driven by my article. ;-)

It is back up again now, though:

The Wikipedia page is a good first source for info on the 2 Oberons, the OS:

... and the programming language:

Prof. Wirth worked at ETH Zurich, which has a microsite about the Oberon project:

There is a native port for x86 PCs. I have this running under VirtualBox.

There's a good overview here:

And the Oberon book is online here:

Development did not stop on the OS after Prof Wirth retired. It continued and became AOS. This has a different type of GUI called a Zooming UI. The AOS zooming UI is called "Bluebottle" and sometimes newer versions of the OS are thus referred to as Bluebottle.

There is a sort of fan page dedicated to AOS here:

January 09, 2016 03:42 PM

January 07, 2016

Liam Proven

The big choice: privacy or convenience [tech blog post, by me]

So a regular long-term member of one of the Ubuntu lists is saying that they don't trust Google to respect their privacy. This from someone who runs Opera 12 (on Ubuntu with Unity) because they had not noticed it had been updated... for three years.

I realise that I could have put this better, but...

As is my wont, I offered one of my favourite quotes:

Scott McNealy, CEO and co-founder of Sun Microsystems, said it best.

He was put on a panel on internet security and privacy, about 20y ago.

Eventually, they asked the silent McNealy to say something.

He replied:

"You have no privacy on the Internet. Get over it."

He was right then and he's right now. It's a public place. It's what it's for. Communication, sharing. Deal with it.

Run current software, follow best-practice guidelines from the likes of SwiftOnSecurity on Twitter, but don't be obsessive about it, because it is totally pointless.

You CANNOT keep everything you do private and secure and also use the 21st century's greatest communications tool.

So you choose. Use the Internet, and stop panicking, or get off it and stay off it.

Your choice.

Modern OSes and apps do "phone home" about what you're doing, yes, sure.

This does not make them spyware.

You want better software? You want things that are more reliable, more helpful, more informative?


Then stop complaining and get on with life.

No? You want something secure, private, that you can trust, that you know will not report anything to anyone?

Then go flash some open-source firmware onto an old Thinkpad and run OpenBSD on it.

There are ways of doing this, but they are hard, they are a lot more work, and you will have a significantly degraded experience with a lot of very handy facilities lost.

That is the price of privacy.

And, listen, I am sorry if this is not what you want to hear, but if you are not technically literate enough to notice that you're running a browser that has been out of date for 3 years, then I think that you are not currently capable of running a really secure environment. I am not being gratuitously rude here! I am merely pointing out facts that others will be too nervous to mention.

You cannot run a mass-market OS like Windows 10, Mac OS X or Ubuntu with Unity and have a totally secure private computer.

You can't. End of. It's over. These are not privacy-oriented platforms.

They do exist. Look at OpenBSD. Look at Qubes OS.

But they are hard work and need immense technical skill -- more than I have, for instance, after doing this stuff for a living for nearly 30y. And even then, you get a much poorer experience, like a faster 1980s computer or something.

As it is, after being on my CIX address for 25 years and my Gmail address for 12, all my email goes through Gmail now -- the old address, the Hotmail and Yahoo spamtraps, all of them. I get all my email, contacts and diary, all in one place, on my Mac and on both my Linux laptops and on both my Android and Blackberry smartphones. It's wonderful. Convenient, friendly, powerful, free, cross-platform and based on FOSS and compatible with FOSS tools.

But it means I must trust Google to store everything.

I am willing to pay that price, for such powerful tools for no money.

I am a trained Microsoft Exchange admin. I could do similar with Office 365, but I've used it, and it's less cross-platform, it's less reliable, it's slower, the native client tools are vastly inferior and it costs money.

Nothing much else could do this unless I hosted my own, which I am technically competent to do but would involve a huge amount of work, spending money and still trusting my hosting provider.

You have a simple choice. Power and convenience and ease, or, learning a lot more tech skills and privacy but also inconvenience, loss of flexibility and capability and simplicity.

You run a closed-source commercial browser on what [another poster] correctly points out is the least-private Linux distro that there is.

You have already made the choice.

So please, stop complaining about it. You chose. You are free to change your mind, but if you do, off to OpenBSD you go. Better start learning shell script and building from source.

January 07, 2016 03:18 PM

December 12, 2015

Andy Smith

Disabling the default IPMI credentials on a Supermicro server

In an earlier post I mentioned that you should disable the default ADMIN / ADMIN credentials on the IPMI controller. Here’s how.

Install ipmitool

ipmitool is the utility that you will use from the command line of another machine in order to interact with the IPMI controllers on your servers.

# apt-get install ipmitool

List the current users

$ ipmitool -I lanplus -H -U ADMIN -a user list
ID  Name             Callin  Link Auth  IPMI Msg   Channel Priv Limit
2   ADMIN            false   false      true       ADMINISTRATOR

Here you are specifying the IP address of the server’s IPMI controller. ADMIN is the IPMI user name you will use to log in, and it’s prompting you for the password which is also ADMIN by default.

Add a new user

You should add a new user with a name other than ADMIN.

I suppose it would be safe to just change the password of the existing ADMIN user, but there is no need to have it named that, so you may as well pick a new name.

$ ipmitool -I lanplus -H -U ADMIN -a user set name 3 somename
$ ipmitool -I lanplus -H -U ADMIN -a user set password 3
Password for user 3:
Password for user 3:
$ ipmitool -I lanplus -H -U ADMIN -a channel setaccess 1 3 link=on ipmi=on callin=on privilege=4
$ ipmitool -I lanplus -H -U ADMIN -a user enable 3

From this point on you can switch to using the new user instead.

$ ipmitool -I lanplus -H -U somename -a user list
ID  Name             Callin  Link Auth  IPMI Msg   Channel Priv Limit
2   ADMIN            false   false      true       ADMINISTRATOR
3   somename         true    true       true       ADMINISTRATOR

Disable ADMIN user

Before doing this bit you may wish to check that the new user you added works for everything you need it to. Those things might include:

  • ssh to somename@
  • Log in on web interface at
  • Various ipmitool commands like querying power status:
    $ ipmitool -I lanplus -H -U somename -a power status
    Chassis power is on

If all of that is okay then you can disable ADMIN:

$ ipmitool -I lanplus -H -U somename -a user disable 2

If you are paranoid (or this is just the first time you’ve done this) you could now check to see that none of the above things now work when you try to use ADMIN / ADMIN.

Specifying the password

I have not done so in these examples but if you get bored of typing the password every time then you could put it in the IPMI_PASSWORD environment variable and use -E instead of -a on the ipmitool command line.

When setting the IPMI_PASSWORD environment variable you probably don’t want it logged in your shell’s history file. Depending on which shell you use there may be different ways to achieve that.

With bash, if you have ignorespace in the HISTCONTROL environment variable then commands prefixed by one or more spaces won’t be logged. Alternatively you could temporarily disable history logging with:

$ set +o history
$ sensitive command goes here
$ set -o history # re-enable history logging

So anyway…

$     export IPMI_PASSWORD=letmein
$ # ^ note the leading spaces here
$ # to prevent the shell logging it
$ ipmitool -I lanplus -H -U somename -E power status
Chassis Power is on

by Andy at December 12, 2015 12:34 AM

December 11, 2015

Alex Bligh

On Nominet’s price rise

Nominet has announced that it is to increase its prices for UK domain names.

The announcement states in essence that prices will rise from a minimum of GBP 2.50 per year per domain (i.e. GBP 5.00 for two years – the same per annum for longer periods) to a minimum of GBP 3.75 per year per domain, which is a 50% price rise (assuming one was previously renewing every two years). Nominet note that the price hasn’t changed since 1999, so this is equivalent by my calculation to a (compound) 6% per year price rise. The cost increase is then potentially reduced by new co-marketing programmes. Note that the one-year registration price was already GBP 3.50 per year, but that’s a relatively new introduction; if you were renewing domains this way, the price increase is smaller.

I’ve been asked what I think about this, and specifically I’ve been asked to sign this petition, which (as far as I can tell) is calling for an EGM of the company to vote on the price changes and some form of consultation. I’m against the former but in favour, in principle, of the latter (for the reasons set out below), and as such I won’t be signing the petition.

Those hardest hit by the price rise are those maintaining large portfolios of domain names where the domain names fees are a high percentage of their cost base. Most ‘normal’ domain name registrants won’t give two hoots if the price of their domain name increases by GPB 1.25 a year, or even five times that. But those whose business relies on keeping these portfolios in order to speculatively sell a fraction of them, or to attract traffic (and thus ad revenue), are going to be affected significantly. Let’s call this group of people “domainers” (although some don’t like that title). The EGM petition appears to have been started by domainers, and signed by many domainers. In many quarters of the industry, domainers are not a popular group. My personal view is that it’s a legitimate business model (if not one I want to be involved in) provided IPR is not infringed, no consumers are deliberately confused, no animals hurt during filming etc.; but others have different views.

Nominet’s handling of this issue has been such a mess that it has succeeded in getting several people to sign this petition who would normally have nothing to do with domainers.

Here’s what I think and why (skip to the end for a summary):

  1. Prima facie, Nominet should have the right (somehow) to change its prices. It’s not reasonable to expect a supplier to maintain the same prices ad infinitum. The question is how.

  2. Those who construct business models which rely on a single supplier for a huge percentage of their cost base, where that supplier has the freedom to change its prices, need to educate themselves on business risk. In this instance, they can renew at the old price for up to ten years (until the new prices come into effect), which will clearly have a cash cost. However, this was a risk that should have been evident from the point Nominet began (certainly since 1999). I’m afraid I have no special sympathy here.

  3. All of the justification for this price rise appears to have looked at supply-side issues, i.e. how much it costs Nominet to register a domain. Let’s briefly look into that. As far as I can tell (and as I raised at their last AGM) their average cost and marginal cost per domain appear both to have risen fairly substantially since I was on the board many years ago. Whilst I accept that there must have been some inflation pressure (e.g. in wages), and the need to maintain an infrastructure handling more load, technology prices have fallen and processes should have been automated. The latter point is why the average wage at Nominet should have (and has) risen; because it should be employing fewer (relatively highly paid) people designing automated systems, not an army of (relatively low paid) administrators doing things manually.

  4. However, despite Nominet’s emphasis on the above, I suspect the real issue is that buried within the accounts are the costs of doing lots of things that do not directly involve .uk registrations. Nominet is attempting to diversify. This might be good, or it might be bad. But Nominet should have been clear as to how much of the increased cost is going towards expenditure in servicing .uk domain names, and how much for other purposes such as increased costs elsewhere (e.g. diversification), or building up reserves (increased revenue without increased costs). Nominet hasn’t published any figures, so we don’t know.

  5. The unexamined side of the equation is demand-side. As far as I can tell, Verisign’s wholesale price is $7.85 (about GBP 5.85) per year, and that’s for a thin registry (where the registry provides far fewer services, and the registrar far more). Clearly on this basis Nominet’s prices are and will remain well below market level. It would thus seem that Nominet is providing a fuller product (perhaps a better product) at a far keener price than its main competitor, despite having fewer economies of scale. And it is a product generally loved in its target market (the UK). Why on earth Nominet didn’t use this as the centre-point of its argument, I don’t know.

  6. I don’t think as a general principle price changes should have to be put to a vote of members. This is how things were (for a while) whilst Clause 19A (the ‘Hutty Clause’) was incorporated into Nominet’s articles. I was and remain in favour of its removal. Having members vote on every price change encourages the perception that Nominet is some form of cartel, and fetters the discretion of the directors. It also makes changing prices an unnecessarily difficult business, meaning it is hard for Nominet to respond to changes in the market place (arguably this can’t have been too much of a worry given the number of years without a price change since it was removed). But Nominet is a commercial organisation, not a golf club, and therefore its pricing should be set by its management.

  7. However, there is the question of how the management should set the prices, i.e. what objective are they attempting to achieve? Verisign is a public company, and its directors set prices to maximise profit in the long term. Nominet cannot distribute its profit to shareholders, so how should it set its prices? Should it too maximise profits? For many years the principles were long term cashflow neutrality, long term P&L neutrality, and maintaining a sufficient reserve for legal challenges and market downturn; these were called ‘the Bligh principles’, because (cough) I came up with them, and they seem to have survived a long while, for better or for worse. Prices were then meant to be set to accord with these principles. Some would argue the principles are still relevant, some would argue they have problems (I have a foot in both camps). But the point is that there were transparent principles that everyone knew about, and if they didn’t agree on them, well, they didn’t in general have a better suggestion.

  8. I am of the view that any change to these guiding principles should be carefully and transparently consulted upon; this is not because I’m particularly attached to the principles above, but because deciding which principles drive Nominet’s behaviour is a key matter of governance. Note this is a different matter to a change in prices (following the guiding principles); I’m happy to leave that to management provided they explain how the change better satisfies the guiding principles. If Nominet don’t publish these principles, or an explanation of how a price change better satisfies them, there is no way members can hold them to account. And whilst I recognise members can occasionally be a pain, there is no one else who can hold Nominet’s management to account. Quite apart from that, transparency is in itself a good thing. As is avoiding the appearance of something that might be problematic to the competition authorities.

  9. What appears to have happened now is that there are no guiding principles, or at least none that we know about. The suggestion that prices are set according to cost recovery principles (never particularly felicitously worded) is simultaneously being removed from the terms and conditions. Is the principle now profit maximisation? If so, please come out and say it. Is the principle now ‘whatever the management feel like’? That is not in my view acceptable. But the principles seem to have disappeared. Prices appear to be being set on the basis that ‘Nominet think they should be higher’. If this is not in fact the case, then Nominet has a communications problem.

  10. Lastly, there seems to have been some bizarre criticism of the co-marketing programmes proposed. The objection is that those who register the most domains get the most co-marketing, and that this is unfair. It seems to me self-evident that those who register the most domains should get the most co-marketing funds, as they are meant to be put towards registering domains. Rather, my problem with them is that the co-marketing funds for the larger registrars are too small. How do I work that out? From the site calling for an EGM: ‘Registrars with over 250,000 domains under management can now claim up to £80,000 per registrar. Smaller registrars with under 5000 domains can only claim £2000.’ This completely misses the point. For an organisation with 250,001 domains, Nominet’s providing GBP 0.32 per domain back, reducing the price for that year to GBP 3.43. For an organisation with 4,999 domain names, Nominet’s providing GBP 0.40 per domain back, reducing the price for that year to GBP 3.35. Or to put it another way, if you have 4,999 domain names as a registrar, and claim your full co-marketing allowance, you get more back per domain than the largest registrars do; if you have 250,000 domain names, you do worse per domain, unless you increase the number of domain names you sell quite substantially. Every co-marketing program I’ve seen before scales in the other direction – i.e. the larger you are, the better deal you get per item sold. Rather than a bulk discount, Nominet is applying a bulk penalty! Whilst I am sure it has its reasons for this, I have no idea why smaller registrars are complaining it’s unfair on them. Of course not all registrars may be eligible to apply, but that’s not dependent on the size of the registrar. 
And the co-marketing is presumably directed at generating new registrations rather than renewals (this is co-marketing, presumably meaning it is dependent on Nominet-related marketing spend from the registrar); whilst that may hit those with domain portfolios they are not growing harder, that’s not dependent on size either, and is presumably a desired result (encouraging people to grow the number of domains under management as opposed to merely renewing an existing portfolio).
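The per-domain arithmetic above is easy to check. A quick sketch in shell (via awk), using only the figures quoted from the EGM site (£80,000 cap for a 250,001-domain registrar, £2,000 cap for a 4,999-domain one) and the new GBP 3.75 minimum annual price:

```shell
# Per-domain co-marketing rebate and resulting effective annual price (GBP),
# using the caps quoted from the EGM site and the new GBP 3.75 minimum price.
awk 'BEGIN {
  price = 3.75
  large = 80000 / 250001   # rebate per domain for the large registrar
  small = 2000  / 4999     # rebate per domain for the small registrar
  printf "large: %.2f back/domain, effective %.2f\n", large, price - large
  printf "small: %.2f back/domain, effective %.2f\n", small, price - small
}'
```

This prints roughly 0.32 back (effective 3.43) for the large registrar and 0.40 back (effective 3.35) for the small one, so per domain it is the smaller registrar who gets the better deal.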

So, back to that petition:

  • Yes, Nominet should have (and should now) consult on any change to its pricing principles, and not change the prices until it has done so; but
  • No, Nominet need not consult (let alone have a vote) on the price change itself

But I think it unsurprising that people are annoyed.

by Alex Bligh at December 11, 2015 09:48 PM

Andy Smith

Installing Debian by PXE using Supermicro IPMI Serial over LAN

Here’s how to install Debian jessie on a Supermicro server using PXE boot and the IPMI serial-over-LAN.

Using these instructions you will be able to complete an install of a remote machine, although you will initially need access to the BIOS to configure the IPMI part.

BIOS settings

This bit needs you to be in the same location as the machine, or else have someone who is make the required changes.

Press DEL to go into the BIOS configuration.

Under Advanced > PCIe/PCI/PnP Configuration make sure that the network interface through which you’ll reach your PXE server has the “PXE” option ROM set:


Under Advanced > Serial Port Console Redirection you’ll want to enable SOL Console Redirection.

BIOS serial console redirection

(Pictured here is also COM1 Console Redirection. This is for the physical serial port on the machine, not the one in the IPMI.)

Under SOL Console Redirection Settings you may as well set the Bits per second to 115200.

BIOS SOL redirection settings

Now it’s time to configure the IPMI so you can interact with it over the network. Under IPMI > BMC Network Configuration, put the IPMI on your management network:

IPMI network configuration

Connecting to the IPMI serial

With the above BIOS settings in place you should be able to save and reboot and then connect to the IPMI serial console. The default credentials are ADMIN / ADMIN, which you should of course change with ipmitool, but that is for a different post.

There are two ways to connect to the serial-over-LAN: you can ssh to the IPMI controller, or you can use ipmitool. Personally I prefer ssh, but the ipmitool way is like this:

$ ipmitool -I lanplus -H <ipmi-ip> -U ADMIN -a sol activate

The ssh way:

$ ssh ADMIN@
The authenticity of host ' (' can't be established.
RSA key fingerprint is b7:e1:12:94:37:81:fc:f7:db:6f:1c:00:e4:e0:e1:c4.
Are you sure you want to continue connecting (yes/no)?
Warning: Permanently added ',' (RSA) to the list of known hosts.
ADMIN@'s password:
ATEN SMASH-CLP System Management Shell, version 1.05
Copyright (c) 2008-2009 by ATEN International CO., Ltd.
All Rights Reserved 
-> cd /system1/sol1
-> start
press <Enter>, <Esc>, and then <T> to terminate session
(press the keys in sequence, one after the other)

They both end up displaying basically the same thing.

The serial console should just be displaying the boot process, which won’t go anywhere.

DHCP and TFTP server

You will need to configure a DHCP and TFTP server on an already-existing machine on the same LAN as your new server. They can both run on the same host.

The DHCP server responds to the initial requests for IP address configuration and passes along where to get the boot environment from. The TFTP server serves up that boot environment. The boot environment here consists of a kernel, initramfs and some configuration for passing arguments to the bootloader/kernel. The boot environment is provided by the Debian project.


DHCP server

I’m using isc-dhcp-server. Its configuration file is at /etc/dhcp/dhcpd.conf.

You’ll need to know the MAC address of the server, which can be obtained either from the front page of the IPMI controller’s web interface, or else it is displayed on the serial console when it attempts to do a PXE boot. So, add a section for that:

# example addresses shown (192.0.2.0/24); substitute your own network
subnet 192.0.2.0 netmask 255.255.255.0 {
    host foo {
        hardware ethernet 0C:C4:7A:7C:28:40;
        fixed-address 192.0.2.20;
        option subnet-mask 255.255.255.0;
        option routers 192.0.2.1;
        next-server 192.0.2.10;
        filename "pxelinux.0";
    }
}

Here we set the network configuration of the new server with fixed-address, option subnet-mask and option routers. The IP address in next-server refers to the IP address of the TFTP server, and pxelinux.0 is what the new server will download from it.

Make sure that the DHCP server is running:

# service isc-dhcp-server start

DHCP uses UDP port 67, so make sure that is allowed through your firewall.


TFTP server

A number of different TFTP servers are available. I use tftpd-hpa, which is mostly configured by variables in /etc/default/tftpd-hpa:

TFTP_USERNAME="tftp"
TFTP_DIRECTORY="/srv/tftp"
TFTP_ADDRESS=":69"
TFTP_OPTIONS="--secure"

TFTP_DIRECTORY is where you’ll put the files for the PXE environment.

Make sure that the TFTP server is running:

# service tftpd-hpa start

TFTP uses UDP port 69, so make sure that is allowed through your firewall.

Download the netboot files from your local Debian mirror:

$ cd /srv/tftp
$ curl -s | sudo tar zxvf -

(This assumes you are installing a device with architecture amd64.)

At this point your TFTP server root should contain a debian-installer subdirectory and a couple of links into it:

$ ls -l .
total 8
drwxrwxr-x 3 root root 4096 Jun  4  2015 debian-installer
lrwxrwxrwx 1 root root   47 Jun  4  2015 ldlinux.c32 -> debian-installer/amd64/boot-screens/ldlinux.c32
lrwxrwxrwx 1 root root   33 Jun  4  2015 pxelinux.0 -> debian-installer/amd64/pxelinux.0
lrwxrwxrwx 1 root root   35 Jun  4  2015 pxelinux.cfg -> debian-installer/amd64/pxelinux.cfg
-rw-rw-r-- 1 root root   61 Jun  4  2015

You could now boot your server and it would call out to PXE to do its netboot, but would be displaying the installer process on the VGA output. If you intend to carry it out using the Remote Console facility of the IPMI interface then that may be good enough. If you want to do it over the serial-over-LAN though, you’ll need to edit some of the files that came out of the netboot.tar.gz to configure that.

Here’s a list of the files you need to edit. All you are doing in each one is telling it to use serial console. The changes are quite mechanical so you can easily come up with a script to do it, but here I will show the changes verbosely. All the files live in the debian-installer/amd64/boot-screens/ directory.

ttyS1 is used here because this system has a real serial port on ttyS0. 115200 is the baud rate of ttyS1 as configured in the BIOS earlier.

In adtxt.cfg, change:

label expert
        menu label ^Expert install
        kernel debian-installer/amd64/linux
        append priority=low vga=788 initrd=debian-installer/amd64/initrd.gz --- 
include debian-installer/amd64/boot-screens/rqtxt.cfg
label auto
        menu label ^Automated install
        kernel debian-installer/amd64/linux
        append auto=true priority=critical vga=788 initrd=debian-installer/amd64/initrd.gz --- quiet

becomes:

label expert
        menu label ^Expert install
        kernel debian-installer/amd64/linux
        append priority=low console=ttyS1,115200n8 initrd=debian-installer/amd64/initrd.gz --- 
include debian-installer/amd64/boot-screens/rqtxt.cfg
label auto
        menu label ^Automated install
        kernel debian-installer/amd64/linux
        append auto=true priority=critical console=ttyS1,115200n8 initrd=debian-installer/amd64/initrd.gz --- quiet

In rqtxt.cfg, change:

label rescue
        menu label ^Rescue mode
        kernel debian-installer/amd64/linux
        append vga=788 initrd=debian-installer/amd64/initrd.gz rescue/enable=true --- quiet

becomes:

label rescue
        menu label ^Rescue mode
        kernel debian-installer/amd64/linux
        append console=ttyS1,115200n8 initrd=debian-installer/amd64/initrd.gz rescue/enable=true --- quiet

In syslinux.cfg, change:

# D-I config version 2.0
# search path for the c32 support libraries (libcom32, libutil etc.)
path debian-installer/amd64/boot-screens/
include debian-installer/amd64/boot-screens/menu.cfg
default debian-installer/amd64/boot-screens/vesamenu.c32
prompt 0
timeout 0

becomes:

serial 1 115200
console 1
# D-I config version 2.0
# search path for the c32 support libraries (libcom32, libutil etc.)
path debian-installer/amd64/boot-screens/
include debian-installer/amd64/boot-screens/menu.cfg
default debian-installer/amd64/boot-screens/vesamenu.c32
prompt 0
timeout 0

In txt.cfg, change:

default install
label install
        menu label ^Install
        menu default
        kernel debian-installer/amd64/linux
        append vga=788 initrd=debian-installer/amd64/initrd.gz --- quiet

becomes:

default install
label install
        menu label ^Install
        menu default
        kernel debian-installer/amd64/linux
        append console=ttyS1,115200n8 initrd=debian-installer/amd64/initrd.gz --- quiet

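As noted above, the changes are mechanical and can be scripted. Here is a minimal sketch, assuming GNU sed; the BOOTSCREENS variable and the scratch-directory dry run are illustrative additions of mine, so the script is safe to test before pointing it at the real files:

```shell
# Sketch: apply the serial-console edits with sed (GNU sed assumed).
# BOOTSCREENS would normally be /srv/tftp/debian-installer/amd64/boot-screens;
# by default it is a scratch directory seeded with sample lines for a dry run.
BOOTSCREENS="${BOOTSCREENS:-$(mktemp -d)}"
cd "$BOOTSCREENS" || exit 1
[ -f txt.cfg ] || printf 'append vga=788 initrd=initrd.gz --- quiet\n' > txt.cfg
[ -f syslinux.cfg ] || printf 'prompt 0\ntimeout 0\n' > syslinux.cfg
# Swap the VGA console argument for the serial one in every config file
sed -i 's/vga=788/console=ttyS1,115200n8/g' ./*.cfg
# Prepend the serial directives to syslinux.cfg
sed -i -e '1i serial 1 115200' -e '1i console 1' syslinux.cfg
grep -l 'console=ttyS1' ./*.cfg
```

Run it with BOOTSCREENS set to the real boot-screens directory once you are happy with what it does.
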
Perform the install

Connect to the serial-over-LAN and get started. If the server doesn’t have anything currently installed then it should go straight to trying PXE boot. If it does have something on the storage that it would boot then you will have to use F12 at the BIOS screen to convince it to jump straight to PXE boot.

$ ssh ADMIN@
ADMIN@'s password:
ATEN SMASH-CLP System Management Shell, version 1.05
Copyright (c) 2008-2009 by ATEN International CO., Ltd.
All Rights Reserved 
-> cd /system1/sol1
-> start
press <Enter>, <Esc>, and then <T> to terminate session
(press the keys in sequence, one after the other)
Intel(R) Boot Agent GE v1.5.13                                                  
Copyright (C) 1997-2013, Intel Corporation                                      
CLIENT MAC ADDR: 0C C4 7A 7C 28 40  GUID: 00000000 0000 0000 0000 0CC47A7C2840  
CLIENT IP:  MASK:  DHCP IP:             
PXELINUX 6.03 PXE 20150107 Copyright (C) 1994-2014 H. Peter Anvin et al    
                 │ Debian GNU/Linux installer boot menu  │
                 │ Install                               │
                 │ Advanced options                    > │
                 │ Help                                  │
                 │ Install with speech synthesis         │
                 │                                       │
                 │                                       │
                 │                                       │
                 │                                       │
                 │                                       │
                 │                                       │
              Press ENTER to boot or TAB to edit a menu entry
  ┌───────────────────────┤ [!!] Select a language ├────────────────────────┐
  │                                                                         │
  │ Choose the language to be used for the installation process. The        │
  │ selected language will also be the default language for the installed   │
  │ system.                                                                 │
  │                                                                         │
  │ Language:                                                               │
  │                                                                         │
  │                               C                                         │
  │                               English                                   │
  │                                                                         │
  │     <Go Back>                                                           │
  │                                                                         │
<Tab> moves; <Space> selects; <Enter> activates buttons

…and now the installation proceeds as normal.

At the end of this you should be left with a system that uses ttyS1 for its console. You may need to tweak that depending on whether you want the VGA console also.

by Andy at December 11, 2015 06:50 PM

December 10, 2015

Andy Smith

Audience tickets for Stewart Lee’s Comedy Vehicle

Last night Jenny and I got the chance to be in the audience for a recording of what will become (some percentage of) four episodes of Stewart Lee’s Comedy Vehicle season 4. Once we actually got in it was a really enjoyable experience, although as usual SRO Audiences were somewhat chaotic with their ticketing procedures.

I’d heard about the chance to get priority audience tickets from the Stewart Lee mailing list, so I applied, but the tickets I got were just their standard ones. From past experience I knew this would mean having to get there really early and queue for ages and still not be sure of getting in, so for most shows on the SRO Audiences site I don’t normally bother. As I particularly like Stewart Lee I decided to persevere this time.

The instructions said they’d be greeting us from 6.20pm, so I decided getting there about an hour early would be a good idea. I know from past experience that they massively over-subscribe their tickets in order to never ever have empty seats. That makes it very difficult to guess how early to be, and I hadn’t been to a Comedy Vehicle recording before either.

The venue was The Mildmay Club in Stoke Newington which was also the venue for all previous recordings of Comedy Vehicle. A bit of a trek from Feltham – train to Richmond then most of the way along the Overground towards Stratford; a good 90 minutes door to door. Nearest station Canonbury but we decided to go early and get some food at Nando’s Dalston first.

We got to the Mildmay Club about 5.25pm and there were already about 15 people queuing outside. Pretty soon the doorman let us in, but only as far as a table just inside the doors where a guy gave us numbered wristbands and told us to come back at 7pm.

This was a bit confusing as we weren’t sure whether that meant we were definitely getting in or if we’d still have to queue (and thus should actually come back before 7). So I asked,

“does the wristband mean we’re definitely getting in?”

“We’ll do our best to get as many people in as we can. We won’t know until 7pm,”

was the non-answer. People were piling up behind us and they wanted us out of the way, so off we went.

Having already eaten we didn’t really have anything else to do, so we had a bit of an aimless wander around Newington Green for half an hour or so before arriving back outside the club again, where the queue was now a crowd bustling around the entrance and trailing off in both directions along the street. We decided to get back in the queue going to the right of the club, which was slowly shrinking, with the idea of asking if we were in the right place once we got to the front. All of the people in this queue were yet to collect their wristbands.

Having got to the front of this queue it was confirmed that we should wait around outside until 7pm, though still no idea whether we would get in or by what process this would be decided. We shuffled into the other queue to the left of the club which consisted of people like us who already had wristbands.

While in this queue, we heard calls for various colours of wristband that weren’t ours (white), and eventually all people in front of us had been called into the club. By about quarter past 6 we’d watched quite a large number of people with colourful wristbands get into the building, and we were starting to seriously consider that we might not be getting into this thing, despite the fact that we were amongst the first 15 people to arrive.

At this point a different member of staff came out and told us off for queuing to the left of the club, because

“you’re not allowed to queue past the shops”

and told us to queue to the right with all the other people who still hadn’t got wristbands yet. Various grumblings on the subject of the queue being really long and how will we know what is going on were heard, to which the response was,

“it doesn’t matter where you are, your wristbands are numbered and we’ll call people in number order anyway. You can go away and come back at 7pm if you like. Nothing is happening before 7pm.”

Well, we didn’t have anything else to do for the next 45 minutes anyway, and there was a lack of trust that everyone involved was giving us the same/correct information, so we decided to remain in this mostly-linear-collection-of-people-which-was-not-a-queue-because-it-would-be-called-in-number-order.

About 6.55pm a staff member popped their head out the door and shouted,

“we’re delayed by about ten minutes but we do love you and we’ll start getting you inside soon.”

And then just a minute or two later he’s back and shouting out,

“wristband numbers below 510, come this way!”

We were 506 and 507.

The exterior of the Mildmay Club isn’t in the best condition. It looks pretty shabby. Inside though it’s quite nice. We were ushered into the bar area which is pretty much the same as the bar of every working men’s club or British Legion club that you have ever seen.

Even though we were amongst the first few white wristband people in, the room was really full already. These must have been all the priority ticket people we saw going in ahead of us. Nowhere for us to sit except the edge of a low stage directly in front of a speaker pumping out blues and Hendrix. Again we started to worry that we would not be getting in to the recording.

It must have been about 7.20pm when they started calling the colourful wristband people out of the bar and in to the theatre. The room slowly drained until it seemed like there were only about ten of us left. And then,

“white wristbands numbered 508 and below please!”

We rushed into the theatre to be confronted with mostly full seating.

“You want to be sat together don’t you?”


“Oh, just take those reserved seats, they’ve blown it now, they’re too late.”

Score! I prodded Jenny in the direction of a set of four previously reserved seats that were in a great position. We were amongst the last twenty or so people to get in. I think if we had shown up even ten minutes later to get the wristbands then we wouldn’t have made it.

In contrast to the outside of the building the theatre itself was really quite nice, very interesting decor, and surprisingly large compared to the impression you get from seeing it on television.

Stewart did two sets of 28 minute pieces, then a short interval and then another 2×28 minutes, so almost two hours. I believe there were recordings on three nights so that’s potentially 12 episodes worth of material, but given that

  1. All the previous series had 6 episodes.
  2. Stewart made a comment at one point about moving something on stage for continuity with the previous night’s recording.

then I assume there are two recordings of each episode’s material, from which they’ll edit together the best bits.

The material itself was great, so fans of Comedy Vehicle have definitely got something to look forward to. If you have previously attempted to consume Stewart Lee’s comedy and found the experience unpalatable then I don’t think anything is going to change for you – in fact it might upset you even more, to be honest. Other than that I’m not going to say anything about it as that would spoil it and I couldn’t do it justice anyway.

Oh, apart from that it’s really endearing to see Stew make himself laugh in the middle of one of his own rants and have to take a moment to compose himself.

As for SRO Audiences, I possibly shouldn’t moan as I have no actual experience of trying to cram hundreds of people into a free event and their first concern has got to be having the audience side of things run smoothly for the production, not for the audience. I get that. All I would say is that:

  • Being very clear with people at wristband issuing time that they will be called in by number, and giving a realistic time for when the numbers would be called, would be helpful. This wasn’t clear for us so on the one hand we hung around being in the way a bit, but on the other hand I’m glad that we didn’t leave it until 7pm to come back because our numbers were called before 7pm and we did only just get in.
  • Doing your best to turn people away early when they have no realistic chance of getting in would be good. There were loads of people with higher number wristbands than us that we did not see in the theatre later. Unsure if they got eventually sent home or if they ended up watching the recording on TV in the bar. At previous SRO Audiences recordings I’ve waited right up until show start time to be told to go home though.

by Andy at December 10, 2015 06:59 AM

December 07, 2015

Aled Treharne (ThinkSIP)

Police misleading the public to pass the Investigatory Powers Bill?

I was browsing twitter this evening when I came across the following tweet, published by an agency that I have a lot of respect for, the National Crime Agency. They’ve been doing some work with Channel 4 and are publicising an upcoming documentary. Part of that campaign seems to have led to this tweet:

The link in their tweet points to an infographic:
Misleading NCA infographic about the use of comms data in a missing person case

I had to read this infographic more than once – it’s so misleading that I had to check how many different problems there were with it. Although I replied to the NCA, 140 characters is a bit limiting for a response, so let me respond, in my professional opinion as a communications industry expert and with over a decade of experience in helping the Police with missing persons cases through Mountain Rescue.

So the infographic starts with a story about Amy, a missing 14 year old girl whose parents are unable to reach her because her phone is switched off. Let’s skip over the fact that this is not a surprising situation for a teenage girl, and suppose that there is something untold in the situation that places Amy in the “high risk” category that requires immediate investigation to prevent her from coming to harm.

At this point, the police request call data records from her mobile phone provider, but “…Amy uses online applications on her smart phone to make calls and send instant messages so no useful data is returned.” I’ll come back to this, but let’s take that at face value for the moment.

The police request that her mobile operator then provides communications data (presumably what the Investigatory Powers Bill is referring to as “Internet Connection Records”), but as they don’t store it, they can’t provide it. The police are stumped and can’t help Amy. Poor Amy.

The infographic then goes on to say how access to those records could help provided “key investigative leads” to trace Amy and reach her before she’s harmed.

Seems reasonable, right? Well, no.

First of all, Mobile Network Operators (MNOs) don’t just store call data records – they store a whole host of useful operational information on your mobile including information about the cell that you are or were connected to. This is useful operational information for the MNO, but is also useful for the security and law enforcement agencies (let’s just call them Law Enforcement Agencies, or LEAs for now). As a result, I wouldn’t be at all surprised if this information was already the subject of an Order for retention under the current regime on all of the MNOs – however, that information is Classified Information, so we can only make educated guesses. That would give the police a good idea where the phone was (with a resolution of anywhere down to a few hundred metres normally) when it was turned off – in some cases, they could even ask the MNO to “ping” the phone from multiple cells to get a very accurate picture of where the handset is – but let’s assume the battery’s been taken out or the mobile destroyed. What else?

Sticking with communications data for a moment, what could that data have revealed if the MNO had been keeping it? Well, it could have told the police that Amy went to Google. She also used WhatsApp, twitter, Google Hangouts, Facebook, Hotmail and a whole host of other random websites.

That data’s limited to what service she was using and when – so for example, you can’t tell if she looked up the number of a local taxi company or searched for a bus timetable on Google. You can’t tell if she sent a message on WhatsApp, twitter, Hangouts or Facebook – because those apps maintain connections with their services on a semi-permanent basis to receive notifications of new messages. My twitter feed for example gets updated on my phone even when I’m not using it. What could the police do with this data? Well, they could approach a Judge and ask for warrants of intercept for each and every one of those services in case she uses them again – none of those are UK based, so they wouldn’t be subject to this new law. Whether the Judge would want more than just “she used them” to get a warrant is a good question. Or perhaps they could check those services and see if she posted anything publicly. Great, so this data was useful, right?

Well, sort of – asking Amy’s mum whether she used facebook reveals that yes, she’s always on it talking to her BFF, Brenda. A quick chat with Brenda, where the police explain the urgency here, and Brenda tells them all about Amy’s new boyfriend who she met online and was going to meet this morning. Or perhaps Brenda tells them about the album that Amy’s favourite band have just released and that she’s gone to buy it in defiance of her mum’s instructions. Or maybe Brenda doesn’t know anything, but she can at least tell the Police what services she uses to communicate with Amy, because Email is just soooo last year, everyone’s using Facebook Messenger at the moment…

“Human intelligence”, or information gathered directly from people rather than their communications, has always been favoured by the LEAs and security services for many reasons – it brings along with it a wealth of useful and sometimes unintentional information. Brenda here was a much better source of information than Amy’s MNO because Brenda can provide context and information that isn’t transmitted by any communications service. Moreover, the MNO has given police a long list of service providers who they need to contact, and the majority of those are outside the UK, making warrants to get access to that data lengthy, time-consuming affairs.

So far then, the communications data hasn’t been as useful as what is occasionally referred to as “good, old-fashioned policing” – talking to relatives, friends and other people who are connected to this individual.

And this brings me neatly to my last point – that the NCA seems to think that there’s nothing else that the police can do. Now, I’ve been involved in dozens of missing persons (MisPer) investigations for high-risk MisPers and the communications data is a minuscule part of the investigation. CCTV cameras, taxi company bookings, conversations with family, friends and neighbours, bank records, credit card records – all of these can help track our activity. More than that, however, they can provide insights into the one thing that the communications data can’t – our reason for doing something and our state of mind. A new boyfriend. An anniversary of the death of a loved one. A pending bankruptcy. An argument with a parent. All of these are hugely important in building up a profile of the missing person and will tell police where they’re likely to be. Unsurprisingly, this is well-documented procedure in the Police Search Manual published by the College of Policing, and that, together with data from the Centre for Search Research, such as the UK Missing Persons Behaviour Study, can be incredibly accurate and useful in guiding the next steps. To suggest, then, that the police are stumped because they can’t get access to the communications data is so misleading as to be almost lying to the public, and is doing an enormous disservice to the difficult and complex work that Police Search Advisors undertake as part of a search for a vulnerable missing person.

I’ve been watching the progress of this bill with interest and far better people than I have commented already – but this bit of poorly written and sensationalist misinformation from the National Crime Agency angered me – not only because it’s trying to influence the direction of a political bill through tugging on heartstrings using information that’s just plain wrong, but that it diminishes the skills and efforts of those teams of people whose job it is to find these vulnerable missing persons.

Hopefully, someone from the National Crime Agency will read this and reply as to why they thought this was acceptable, but I’m not holding my breath.

This post has been edited to correct the title of the bill, which I originally referred to as the Communications Data Bill – this was the original Snooper’s Charter in 2012, and not the new Investigatory Powers Bill which is currently being proposed.

by Aled Treharne at December 07, 2015 10:25 AM

December 06, 2015

Liam Proven

Electric cars & desktop Linux: eternal niche players, never going to take over? [tech blog post]

Both are niche today. Conceded, yes, but… and it’s a big “but”…

It depends on 2 things: how you look at it, & possible changes in circumstances.

Linux *on the desktop* is niche, sure. But that’s because of the kind of desktop/laptop usage roles techies see.

In other niches:

The URL explains the main story: 51% of American classroom computers are Chromebooks now. That’s a lot, and that’s 100% Linux.

And it’s happened quite quickly (in under 3y), without noise or fuss, without anyone paying a lot of attention. That’s how these changes often happen: under the radar, unnoticeably until suddenly you wake up & it’s all different.

In servers, it utterly dominates. On pocket smart devices, it utterly dominates.

But look at conventional adults’ desktops and laptops, no, it’s nowhere, it’s niche.

So, for now, on the road as private vehicles, e-cars are a small niche, yes.

But, in some role we’re not thinking about — public transport, or taxis, or something other than private cars — they might quietly gain the edge and take over without us noticing, as Chromebooks are doing in some niches.

The result, of course, is that they’re suddenly “legitimised” — there’s widespread knowledge, support, tooling, whatever and suddenly changes in some other niche mean that they’re a lot more viable for private cars.

For years, I ran the fastest computer I could afford. Often that was for very little money, because in the UK I was poor for a long time. I built and fixed and bodged. My last box was a honking big quad-core with 8GB of RAM (from Freecycle) with a dual-head 3D card (a friend’s cast-off) and lots of extras.

Then I sold, gave or threw away or boxed up most of my stuff, came over here, and had money but less space and less need to bodge. So I bought a friend’s old Mac mini. I’m typing on it now, on a 25y old Apple keyboard via a converter.

It’s tiny, silent except when running video or doing SETI, and being a Mac takes no setup or maintenance. So much less work than my Hackintosh was.

Things change, and suddenly an inconceivable solution is the sensible or obvious one. I don’t game much — very occasional bit of Portal - so I don’t need a GPU. I don’t need massive speed so a Core i5 is plenty. I don’t need removable media any more, or upgradability, or expandability.

Currently, people buy cars like my monster Hackintosh: used, cheap, but big, spacious, powerful, with lots of space in ‘em, equally capable of going to the shops or taking them to the other end of the country — or a few countries away. Why? Well that’s because most cars are just like that. It’s normal. It doesn’t cost anything significant.

But in PCs, that’s going away. People seem to like laptops and NUCs and net-tops and Chromebooks and so on: tiny, no expansion slots, often no optical media, not even ExpressCard slots or the like any more — which were standard a decade or 2 ago. With fast external serial buses, we don’t need them any more.

Big bulky PCs are being replaced by small, quiet, almost-unexpandable ones. Apple is as ever ahead of the trade: it doesn’t offer any machines with expansion slots at all any more. You get notebooks, iMacs, Mac minis or the slotless built-around-its-cooling Mac Pro, incapable of even housing a spinning hard disk.

Why? When they’re this bloody fast anyway, only hobbyist dabblers change CPUs or GPUs. Everyone else uses it ’till it dies then replaces it.

Cars may well follow. Most only do urban cycle motoring: work, shops, occasional trip to the seaside or something. Contemporary electric cars do that fine and they’re vastly cheaper to run. And many don’t need ‘em daily so use car clubs such as Zipcar etc.

Perhaps the occasional longer trips will be taken up by some kind of cheap rentals, or pooling, or something unforeseen.

But it’s a profound error of thinking to write them off as being not ready yet, or lacking infrastructure, or not viable. They are, right now, and they are creeping in.

We are not so very far from the decline and fall of Windows and the PC. It might not happen, but with Mac OS X and Chromebooks and smarter tablets and convertibles and so on, the enemies are closing in. Not at the gate yet, but camped all around.

Electric vehicles aren’t quite there yet but they’re closer than the comments in this thread — entirely typically for the CIX community — seem to think.

December 06, 2015 12:11 PM

December 04, 2015

Liam Proven

Reg writings (recent publication summary, for anyone interested) [post on my tech blog]

Not to blow my own horn, but, er, well, OK, yes I am. Tootle-toot.

I'm back writing for the Register again, in case you hadn't noticed. A piece a month for the last 2 months.

You might enjoy these:

Old, not obsolete: IBM takes Linux mainframes back to the future
Your KVMs, give them to me

From Zero to hero: Why mini 'puter Oberon should grab Pi's crown
It's more kid-friendly... No really

December 04, 2015 05:40 PM

Steve Kennedy

Socksy, get new socks every quarter

Socksy held their relaunch party last night at the Groucho club in Soho.

Socksy is a socks as a service company (SaaS), you sign-up and then get 3 pairs of high quality socks (street, neat or chic) delivered to your door every three months. They come in an A4 box so it will fit through your letterbox and the box can then be used to file papers etc.

The socks are all high quality and though the preferred Socksy method is for the subscriptions, it's also possible to buy them individually.

Socksy Mens' socks are generally knee length which takes a bit of getting used to if your used to wearing ankle length socks, but they're meant to be the 'real deal' and they're not socks if not knee length. Who knew?

If anyone wants to use the service, there’s a 25% discount off the normal £60 per quarter service: by using code "lucky feet" (without quotes) the subscription fee is reduced to £45 every 3 months, so that’s 3 pairs of high quality socks, i.e. £15 a pair.

Socksy are also on Twitter, Facebook, Instagram and Tumblr.

by Steve Karmeinsky at December 04, 2015 12:43 PM

November 30, 2015

Steve Kennedy

Last chance to control your home with nCube

nCube is a home hub that connects to your home network and then allows you to control devices in your home through the nCube app.

It works with lots of home devices such as the Nest thermostat, Philips Hue and LIFX lights, Sonos music and Belkin WeMo plug-in & lightbulb products.

Protocols supported are WiFi, Bluetooth and Z-Wave.

The device is secure (the phone app must be set up on the home network) and the nCube device uses a VPN into the nCube cloud services.

As well as being functional the nCube app is designed to be easy to use and has won several design awards.

The nCube Kickstarter campaign finishes in 3 days so get backing, it's still possible to get one for an early bird price of £99 (it will retail for £139).

by Steve Karmeinsky at November 30, 2015 09:59 PM

Mike Hughes

Weird Harmonics? A Whistling HP 6830 Printer?

I have a HP 6830 printer.

It is making a high-pitched, barely audible, but just enough to be annoying (to me anyway) whistling/singing noise, that sounds a bit like modem noise.

The frequency and pitch of the noise changes when its wifi client is switched off. It’s still there, just slightly different.

The whistling noise is still present even when the printer is in standby.

It changes pitch again when it is powered down.

It goes completely once the mains lead is removed, and returns when the mains lead is plugged back in.

Sadly, I’ve tried recording it, but nothing I’ve got seems to be able to pick it up clearly.

Anyone else got one and come across this?

by Mike Hughes at November 30, 2015 11:44 AM

November 24, 2015

Mark Goodge

Christmas creativity needed

I was in Morrisons this afternoon, and noticed that they’ve already started playing Christmas songs. Modulo the usual complaint that the third week in November is too early, it occurred to me that we really need some new Christmas songs. It’s OK to be playing all the traditional favourites, but the problem is that all we seem to have are the old traditional favourites. The Christmas single used to be a staple of the musical season, but bands seem to have largely given up on them in recent years.

Personally, I blame Simon’s Karaoke Show. By creating a guaranteed Christmas number one that’s there on the back of pure marketing hype, not merit – or even unpredictable random popularity – it’s put other artists off trying to aim for the traditional year-ending chart topper.

Maybe the forthcoming demise of the show – it’s widely rumoured that this year will be the last, and not before time – will open up the opportunity for a genuine battle for the Christmas number one, just like we used to have. If so, then maybe there’s an opportunity there for some of our current best-selling artists to come up with something suitably festive.

I’ve got nothing against the likes of Slade, Wizzard, Jona Lewie or Ella Fitzgerald, but they’re hardly contemporary. I reckon that Ed Sheeran or Adele could come up with a cracking Christmas tune. Heck, I’d even put up with something by Justin Bieber or One Direction, if it’s at least original and fresh. So come on, musicians, how about taking up the challenge for Christmas 2016?

by Mark at November 24, 2015 07:02 PM

November 21, 2015

Jonathan McDowell

Updating a Brother HL-3040CN firmware from Linux

I have a Brother HL-3040CN networked colour laser printer. I bought it 5 years ago and I kinda wish I hadn’t. I’d done the appropriate research to confirm it worked with Linux, but I didn’t realise it only worked via a 32-bit binary driver. It’s the only reason I have 32 bit enabled on my house server and I really wish I’d either bought a GDI printer that had an open driver (Samsung were great for this in the past) or something that did PCL or Postscript (my parents have a Xerox Phaser that Just Works). However I don’t print much (still just on my first set of toner) and once set up the driver hasn’t needed much kicking.

A more major problem comes with firmware updates. Brother only ship update software for Windows and OS X. I have a Windows VM but the updater wants the full printer driver setup installed and that seems like overkill. I did a bit of poking around and found reference in the service manual to the ability to do an update via USB and a firmware file. Further digging led me to a page on resurrecting a Brother HL-2250DN, which discusses recovering from a failed firmware flash. It provided a way of asking the Brother site for the firmware information.

First I queried my printer details:

$ snmpwalk -v 2c -c public hl3040cn.local iso.
iso. = STRING: "MODEL=\"HL-3040CN series\""
iso. = STRING: "SPEC=\"0001\""
iso. = STRING: "FIRMID=\"MAIN\""
iso. = STRING: "FIRMVER=\"1.11\""
iso. = STRING: "FIRMVER=\"1.02\""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""

I used that to craft an update file which I sent to Brother via curl:

curl -X POST -d @hl3040cn-update.xml -H "Content-Type:text/xml" --sslv3
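The update file itself is a small XML document wrapping the values from the snmpwalk above. The element names here are reconstructed from memory of the HL-2250DN recovery page, so treat this as an approximate sketch rather than the exact schema Brother expects:

```xml
<REQUESTINFO>
  <FIRMUPDATETOOLINFO>
    <FIRMCATEGORY>MAIN</FIRMCATEGORY>
    <OS>LINUX</OS>
    <INSPECTMODE>1</INSPECTMODE>
  </FIRMUPDATETOOLINFO>
  <FIRMUPDATEINFO>
    <MODELINFO>
      <!-- Values taken from the snmpwalk output above -->
      <NAME>HL-3040CN series</NAME>
      <SPEC>0001</SPEC>
      <FIRMINFO>
        <FIRM>
          <ID>MAIN</ID>
          <VERSION>1.11</VERSION>
        </FIRM>
      </FIRMINFO>
    </MODELINFO>
  </FIRMUPDATEINFO>
</REQUESTINFO>
```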

This gave me back some XML with a URL for the latest main firmware, version 1.19, filename LZ2599_N.djf. I downloaded that and took a look at it, discovering it looked like a PJL file. I figured I’d see what happened if I sent it to the printer:

cat LZ2599_N.djf | nc hl3040cn.local 9100

The LCD on the front of printer proceeded to display something like “Updating Program” and eventually the printer re-DHCPed and indicated the main firmware had gone from 1.11 to 1.19. Great! However the PCLPS firmware was still at 1.02 and I’d got the impression that 1.04 was out. I didn’t manage to figure out how to get the Brother update website to give me the 1.04 firmware, but I did manage to find a copy of LZ2600_D.djf which I was then able to send to the printer in the same way. This led to:

$ snmpwalk -v 2c -c public hl3040cn.local iso.
iso. = STRING: "MODEL=\"HL-3040CN series\""
iso. = STRING: "SPEC=\"0001\""
iso. = STRING: "FIRMID=\"MAIN\""
iso. = STRING: "FIRMVER=\"1.19\""
iso. = STRING: "FIRMVER=\"1.04\""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""

Cool, eh?

[Disclaimer: This worked for me. I’ve no idea if it’ll work for anyone else. Don’t come running to me if you brick your printer.]

November 21, 2015 01:27 PM

November 16, 2015

Steve Kennedy

Ofcom consults on Walkie Talkies, Level Crossings and DECT phones

Ofcom, the Super Regulator, is holding a consultation on Licence Exemption of Wireless Telegraphy Devices.

Walkie Talkies, or as they're officially known, Private Mobile Radio (PMR), are allowed to operate on a licence-exempt basis in the band 446.0 - 446.2 MHz. Previously this was split into two bands, 446.0 - 446.1 MHz for Analogue PMR446 equipment and 446.1 - 446.2 MHz for Digital PMR446 equipment.

Ofcom is proposing to make the whole band available for both analogue and digital PMR446 equipment, whereby: -

  • the band 446.0-446.2 MHz for the use of analogue PMR 446 with a channel plan based on 12.5 kHz spacing where the lowest carrier frequency is 446.00625 MHz
  • the band 446.1-446.2 MHz for the use of digital PMR 446 with a channel plan based on 6.25 kHz and 12.5 kHz spacing where the lowest carrier frequencies are 446.103125 MHz and 446.10625 MHz respectively
  • the band 446.0-446.2 MHz for the use of digital PMR 446 with a channel plan based on 6.25 kHz and 12.5 kHz spacing where the lowest carrier frequencies are 446.003125 MHz and 446.00625 MHz respectively as of 1 January 2018
  • analogue PMR446 equipment operating in the frequency range 446.1-446.2 MHz should use more robust receivers as specified in ETSI TS 103 236 or equivalent technical specifications

This would allow any device to transmit at a maximum of 500mW; no fixed basestations are allowed and the maximum transmit time would be 180s. This would all come into effect in January 2018.
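
As an aside, the arithmetic behind those channel plans is easy to check. Here's a quick sketch (my own illustration, not part of the consultation) that enumerates the 12.5 kHz analogue channel plan:

```python
# Enumerate the analogue PMR446 channel plan described above:
# lowest carrier 446.00625 MHz, 12.5 kHz spacing, within the 446.0-446.2 MHz band.
def pmr446_channels(lowest_mhz=446.00625, spacing_mhz=0.0125, band_top_mhz=446.2):
    channels = []
    n = 0
    while lowest_mhz + n * spacing_mhz < band_top_mhz:
        channels.append(round(lowest_mhz + n * spacing_mhz, 6))
        n += 1
    return channels

chans = pmr446_channels()
print(len(chans))             # 16 channels fit in the 200 kHz band
print(chans[0], chans[-1])    # 446.00625 ... 446.19375
```

Calling `pmr446_channels(446.003125, 0.00625)` gives the corresponding 32-channel 6.25 kHz digital plan.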

Ofcom also wish to change the mandated exclusion zones around radio astronomy sites for level crossing radar to co-ordinated exclusion zones, i.e. level crossing radar in the exclusion zones could be used with the coordination of the Radio Astronomy sites. The methodology, decision and appeal processes to determine whether a device can be deployed in the coordination zone are to be agreed between the rail network operators and the Radio Astronomy service.

The current exclusion zones are

Site          NGR             Exclusion zone
Jodrell Bank  SJ 79650 50950  20 km
Cambridge     TL 39400 54000  20 km
Defford       SO 90200 44700  20 km
Darnhall      SJ 64275 62265  20 km
Knockin       SJ 32855 21880  20 km
Pickmere      SJ 70404 76945  20 km

DECT equipment has been license exempt for a number of years, operating in the band 1880 to 1900 MHz. Currently the document exempting the equipment states that a handset connects to a basestation, and it is proposed to change this to a short range device (SRD), to make it more applicable to handsets that aren't connected to the telephone network.

Stakeholders wishing to respond may do so using Ofcom's on-line form.

by Steve Karmeinsky at November 16, 2015 04:53 PM

November 13, 2015

Andy Smith

Supermicro IPMI remote console on Ubuntu 14.04 through SSH tunnel

I normally don’t like using the web interface of Supermicro IPMI because it’s extremely clunky, unintuitive and uses Java in some places.

The other day however I needed to look at the console of a machine which had been left running Memtest86+. You can make Memtest86+ output to serial which is generally preferred for looking at it remotely, but this wasn’t run in that mode so was outputting nothing to the IPMI serial-over-LAN. I would have to use the Java remote console viewer.

As an added wrinkle, the IPMI network interfaces are on a network that I can’t access except through an SSH jump host.

So, I just gave it a go without doing anything special other than launching an SSH tunnel:

$ ssh me@jumphost -L127.0.0.1:1443: -N

This tunnels my localhost port 1443 to port 443 of as available from the jump host. Local port 1443 used because binding low ports requires root privileges.

This allowed me to log in to the web interface of the IPMI at https://localhost:1443/, though it kept putting up a dialog which said I needed a newer JDK. Going to “Remote Control / Console Redirection” attempted to download a JNLP file and then said it failed to download.

This was with openjdk-7-jre and icedtea-7-plugin installed.

I decided maybe it would work better if I installed the Oracle Java 8 stuff (ugh). That was made easy by following these simple instructions. That’s an Ubuntu PPA which does everything for you, after you agree that you are a bad person who should feel bad (er, accept the license).

This time things got a little further, but still failed saying it couldn’t download a JAR file. I noticed that it was trying to download the JAR from even though my tunnel was on port 1443.

I eventually did get the remote console viewer to work but I’m not 100% convinced it was because I switched to Oracle Java.

So, basic networking issue here. Maybe it really needs port 443?

Okay, ran SSH as root so it could bind port 443. Got a bit further but now says “connection failed” with no diagnostics as to exactly what connection had failed. Still, gut instinct was that this was the remote console app having started but not having access to some port it needed.

Okay, ran SSH as a SOCKS proxy instead, set the SOCKS proxy in my browser. Same problem.

Did a search to see what ports the Supermicro remote console needs. Tried a new SSH command:

$ sudo ssh me@jumphost \
-L127.0.0.1:443: \
-L127.0.0.1:5900: \
-L127.0.0.1:5901: \
-L127.0.0.1:5120: \
-L127.0.0.1:5123: -N

Apart from a few popup dialogs complaining about “MalformedURLException: unknown protocol: socket” (wtf?), this now appears to work.
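
For repeat visits, the same set of forwards can live in ~/.ssh/config. A sketch, with `<ipmi-ip>` standing in for the BMC address (elided above), and invoked as `sudo ssh -N ipmi-tunnel` because of the low port:

```
Host ipmi-tunnel
    HostName jumphost
    User me
    # Web UI on 443, plus the video/virtual-media ports the
    # Supermicro iKVM viewer appears to need.
    LocalForward 127.0.0.1:443 <ipmi-ip>:443
    LocalForward 127.0.0.1:5900 <ipmi-ip>:5900
    LocalForward 127.0.0.1:5901 <ipmi-ip>:5901
    LocalForward 127.0.0.1:5120 <ipmi-ip>:5120
    LocalForward 127.0.0.1:5123 <ipmi-ip>:5123
```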

Supermicro IPMI remote console

by Andy at November 13, 2015 05:04 AM

November 09, 2015

Jonathan McDowell

The Joy of Recruiters

Last week Simon retweeted a link to Don’t Feed the Beast – the Great Tech Recruiter Infestation. Which reminded me I’d been meaning to comment on my own experiences from earlier in the year.

I don’t entertain the same level of bile as displayed in the post, but I do have a significant level of disappointment in the recruitment industry. I had conversations with 3 different agencies, all of whom were geographically relevant. One contacted me, the other 2 (one I’d dealt with before, one that was recommended to me) I contacted myself. All managed to fail to communicate with any level of acceptability.

The agency that contacted me eventually went quiet, after having asked if they could put my CV forward for a role and pushing very hard about when I could interview. The contact in the agency I’d dealt with before replied to say I was being passed to someone else who would get in contact. Who of course didn’t. And the final agency, who had been recommended, passed me between 3 different people, said they were confident they could find me something, and then went dark except for signing me up to their generic jobs list which failed to have anything of relevance on it.

As it happens my availability and skill set were not conducive to results at that point in time, so my beef isn’t with the inability to find a role. Instead it’s with the poor levels of communication presented by an industry which seems, to me, to have communication as part of the core value it should be offering. If anyone had said at the start “Look, it’s going to be tricky, we’ll see what we can do” or “Look, that’s not what we really deal in, we can’t help”, that would have been fine. I’m fine with explanations. I get really miffed when I’m just left hanging.

I’d love to be able to say I’ll never deal with a recruiter again, but the fact of the matter is they do serve a purpose. There’s only so far a company can get with word of mouth recruitment; eventually that network of personal connections from existing employees who are considering moving dries up. Advertising might get you some more people, but it can also result in people who are hugely inappropriate for the role. From the company point of view recruiters nominally fulfil 2 roles. Firstly they connect the prospective employer with a potentially wider base of candidates. Secondly they should be able to do some sort of, at least basic, filtering of whether a candidate is appropriate for a role. From the candidate point of view the recruiter hopefully has a better knowledge of what roles are out there.

However the incentives to please each side are hugely unbalanced. The candidate isn’t paying the recruiter. “If you’re not paying for it, you’re the product” may be bandied around too often, but I believe this is one of the instances where it’s very applicable. A recruiter is paid by their ability to deliver viable candidates to prospective employers. The delivery of these candidates is the service. Whether or not the candidate is happy with the job is irrelevant beyond them staying long enough that the placement fee can be claimed. The lengthy commercial relationship is ideally between the company and the recruitment agency, not the candidate and the agency. A recruiter wants to be able to say “Look at the fine candidate I provided last time, you should always come to me first in future”. There’s a certain element of wanting the candidate to come back if/when they are looking for a new role, but it’s not a primary concern.

It is notable that the recommendations I’d received were from people who had been on the hiring side of things. The recruiter has a vested interest in keeping the employer happy, in the hope of a sustained relationship. There is little motivation for keeping the candidate happy, as long as you don’t manage to scare them off. And, in fact, if you scare some off, who cares? A recruiter doesn’t get paid for providing the best possible candidate. Or indeed a candidate who will fully engage with the role. All they’re required to provide is a hire-able candidate who takes the role.

I’m not sure what the resolution is to this. Word of mouth only scales so far for both employer and candidate. Many of the big job websites seem to be full of recruiters rather than real employers. And I’m sure there are some decent recruiters out there doing a good job, keeping both sides happy and earning their significant cut. I’m sad to say I can’t foresee any big change any time soon.

[Note I’m not currently looking for employment.]

[No recruitment agencies were harmed in the writing of this post. I have deliberately tried to avoid outing anyone in particular.]

November 09, 2015 05:45 PM

Andy Smith

Linux Software RAID and drive timeouts

All the RAIDs are breaking

I feel like I’ve been seeing a lot more threads on the linux-raid mailing list recently where people’s arrays have broken, they need help putting them back together (because they aren’t familiar with what to do in that situation), and it turns out that there’s nothing much wrong with the devices in question other than device timeouts.

When I say “a lot”, I mean, “more than I used to.”

I think the reason for the increase in failures may be that HDD vendors have been busy segregating their products into “desktop” and “RAID” editions in a somewhat arbitrary fashion, by removing features from the “desktop” editions in the drive firmware. One of the features that today’s consumer desktop drives tend to entirely lack is configurable error timeouts, also known as SCTERC, also known as TLER.


If you use redundant storage but may be using non-RAID drives, you absolutely must check them for configurable timeout support. If they don’t have it then you must increase your storage driver’s timeout to compensate, otherwise you risk data loss.

How do storage timeouts work, and when are they a factor?

When the operating system requests a read from or a write to a particular drive sector and fails to do so, the drive keeps trying, and does nothing else while it is trying. An HDD that either does not have configurable timeouts or that has them disabled will keep doing this for quite a long time—minutes—and won’t be responding to any other command while it does that.

At some point Linux’s own timeouts will be exceeded and the Linux kernel will decide that there is something really wrong with the drive in question. It will try to reset it, and that will probably fail, because the drive will not be responding to the reset command. Linux will probably then reset the entire SATA or SCSI link and fail the IO request.

In a single drive situation (no RAID redundancy) it is probably a good thing that the drive tries really hard to get/set the data. If it really persists it just may work, and so there’s no data loss, and you are left under no illusion that your drive is really unwell and should be replaced soon.

In a multiple drive software RAID situation it’s a really bad thing. Linux MD will kick the drive out because as far as it is concerned it’s a drive that stopped responding to anything for several minutes. But why do you need to care? RAID is resilient, right? So a drive gets kicked out and added back again, it should be no problem.

Well, a lot of the time that’s true, but if you happen to hit another unreadable sector on some other drive while the array is degraded then you’ve got two drives kicked out, and so on. A bus / controller reset can also kick multiple drives out. It’s really easy to end up with an array that thinks it’s too damaged to function because of a relatively minor amount of unreadable sectors. RAID6 can’t help you here.

If you know what you’re doing you can still coerce such an array to assemble itself again and begin rebuilding, but if its component drives have long timeouts set then you may never be able to get it to rebuild fully!

What should happen in a RAID setup is that the drives give up quickly. In the case of a failed read, RAID just reads it from elsewhere and writes it back (causing a sector reallocation in the drive). The monthly scrub that Linux MD does catches these bad sectors before you have a bad time. You can monitor your reallocated sector count and know when a drive is going bad.
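
Pulling that reallocated sector count out of smartctl’s attribute table is simple enough for monitoring purposes. A small sketch of mine (the sample line mimics the table layout smartmontools prints for `smartctl -A`):

```python
# Extract the raw Reallocated_Sector_Ct value (SMART attribute 5) from
# `smartctl -A` output, so it can be graphed or alerted on.
def reallocated_sectors(smartctl_output):
    for line in smartctl_output.splitlines():
        fields = line.split()
        # Columns: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
        if len(fields) >= 10 and fields[1] == "Reallocated_Sector_Ct":
            return int(fields[9])
    raise ValueError("Reallocated_Sector_Ct not found")

sample = "  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0"
print(reallocated_sectors(sample))  # 0
```

A steadily climbing raw value here is the early warning the scrub gives you; zero is what a healthy drive should report.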

How to check/set drive timeouts

You can query the current timeout setting with smartctl like so:

# for drive in /sys/block/sd*; do drive="/dev/$(basename $drive)"; echo "$drive:"; smartctl -l scterc $drive; done

You hopefully end up with something like this:

smartctl 6.4 2014-10-07 r4002 [x86_64-linux-3.16.0-4-amd64] (local build)
Copyright (C) 2002-14, Bruce Allen, Christian Franke,
SCT Error Recovery Control:
           Read:     70 (7.0 seconds)
          Write:     70 (7.0 seconds)
smartctl 6.4 2014-10-07 r4002 [x86_64-linux-3.16.0-4-amd64] (local build)
Copyright (C) 2002-14, Bruce Allen, Christian Franke,
SCT Error Recovery Control:
           Read:     70 (7.0 seconds)
          Write:     70 (7.0 seconds)

That’s a good result because it shows that configurable error timeouts (scterc) are supported, and the timeout is set to 70 all over. That’s in centiseconds, so it’s 7 seconds.

Consumer desktop drives from a few years ago might come back with something like this though:

SCT Error Recovery Control:
           Read:     Disabled
          Write:     Disabled

That would mean that the drive supports scterc, but does not enable it on power up. You will need to enable it yourself with smartctl again. Here’s how:

# smartctl -q errorsonly -l scterc,70,70 /dev/sda

That will be silent unless there is some error.

More modern consumer desktop drives probably won’t support scterc at all. They’ll look like this:

Warning: device does not support SCT Error Recovery Control command

Here you have no alternative but to tell Linux itself to expect this drive to take several minutes to recover from an error and please not aggressively reset it or its controller until at least that time has passed. 180 seconds has been found to be longer than any observed desktop drive will try for.

# echo 180 > /sys/block/sda/device/timeout

I’ve got a mix of drives that support scterc, some that have it disabled, and some that don’t support it. What now?

It’s not difficult to come up with a script that leaves your drives set into their most optimal error timeout condition on each boot. Here’s a trivial example:

for disk in `find /sys/block -maxdepth 1 -name 'sd*' | xargs -n 1 basename`
do
    smartctl -q errorsonly -l scterc,70,70 /dev/$disk
    if test $? -eq 4
    then
        echo "/dev/$disk doesn't support scterc, setting timeout to 180s" '/o\'
        echo 180 > /sys/block/$disk/device/timeout
    else
        echo "/dev/$disk supports scterc " '\o/'
    fi
done

If you call that from your system’s startup scripts (e.g. /etc/rc.local on Debian/Ubuntu) then it will try to set scterc to 7 seconds on every /dev/sd* block device. If it works, great. If it gets an error then it sets the device driver timeout to 180 seconds instead.

There are a couple of shortcomings with this approach, but I offer it here because it’s simple to understand.

It may do odd things if you have a /dev/sd* device that isn’t a real SATA/SCSI disk, for example if it’s iSCSI, or maybe some types of USB enclosure. If the drive is something that can be unplugged and plugged in again (like a USB or eSATA dock) then the drive may reset its scterc setting while unpowered and not get it back when re-plugged: the above script only runs at system boot time.

A more complete but more complex approach may be to get udev to do the work whenever it sees a new drive. That covers both boot time and any time one is plugged in. The smartctl project has had one of these scripts contributed. It looks very clever—for example it works out which devices are part of MD RAIDs—but I don’t use it yet myself as a simpler thing like the script above works for me.
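For flavour, a udev rule for this could look something like the sketch below. The rules filename and the helper script path are assumptions for illustration, and this is not the contributed script mentioned above; the helper would contain logic much like the boot-time script earlier.

```
# /etc/udev/rules.d/60-drive-timeout.rules (hypothetical filename)
# Whenever a whole-disk sd* device appears, hand it to a helper script
# that tries scterc first and falls back to the 180s driver timeout.
ACTION=="add", SUBSYSTEM=="block", KERNEL=="sd[a-z]", \
    RUN+="/usr/local/sbin/set-drive-timeout /dev/%k"
```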

What about hardware RAIDs?

A hardware RAID controller is going to set low timeouts on the drives itself, so as long as they support the feature you don’t have to worry about that.

If the support isn’t there in the drive then you may or may not be screwed: chances are that the RAID controller is going to be smarter about how it handles slow requests and just ignore the drive for a while. If you are unlucky, though, you will end up in a position where some of your drives need the setting changed but you can’t directly address them with smartctl. Some brands, e.g. 3ware/LSI, do allow smartctl interaction through a control device.

When using hardware RAID it would be a good idea to only buy drives that support scterc.

What about ZFS?

I don’t know anything about ZFS, and a quick look gives some conflicting advice.

Drives with scterc support don’t cost that much more, so I’d probably want to buy them and check it’s enabled if it were me.

What about btrfs?

As far as I can see btrfs does not kick drives out itself; it leaves that to Linux, so you’re probably not at risk of losing data.

If your drives do support scterc though then you’re still best off making sure it’s set as otherwise things will crawl to a halt at the first sign of trouble.

What about NAS devices?

The thing about these is, they’re quite often just low-end hardware running Linux and doing Linux software RAID under the covers. With the disadvantage that you maybe can’t log in to them and change their timeout settings. This post claims that a few NAS vendors say they have their own timeouts and ignore scterc.

So which drives support SCTERC/TLER and how much more do they cost?

I’m not going to list any here because the list will become out of date too quickly. It’s just something to bear in mind, check for, and take action over.

Fart fart fart

Comments along the lines of “Always use hardware RAID” or “always use $filesystem” will be replaced with “fart fart fart,” so if that’s what you feel the need to say you should probably just do so on Twitter instead, where I will only have the choice to read them in my head as “fart fart fart.”

by Andy at November 09, 2015 09:06 AM

November 03, 2015

Steve Kennedy

Misfit Shine2 - it's not shiny

Misfit have produced the prettiest wearable for a while; the original Shine tracked steps and was one of the first (if not the first) to do automatic sleep tracking. It was a small disk (27.5mm across and 3.3mm high) which contained the electronics and a changeable CR2032 coin cell that lasted for about 6 months. It came with a silicone strap and a silicone magnetic clip so it could be worn on the wrist or clipped on to a t-shirt, bra, shoes, trouser pocket or wherever suited the user. It's also possible to buy socks and t-shirts with a dedicated Shine pocket, and a necklace too. It links back to the Misfit app (Android and iOS) using Bluetooth 4.0.

Now the Shine2 is out; it's bigger (30.5mm across and 8mm high) and comes in matt black (carbon black, as Misfit describe it) and rose gold. It's also water resistant to 50m. The original Shine had 12 white LEDs around the edge and these have been upgraded to RGB LEDs; there's also a 'buzzer' inside that can notify you of various things. The battery is still a CR2032, which should last for around 6 months, and Bluetooth is now 4.1, which allows for faster data transfers.

The Shine2 can now wake you up by its buzzer (you set the time in the app), the original Shine had the smart alarm feature, but you'd need the phone by your side. It can now also notify you of calls and texts.

The strap and clip unfortunately don't feel as well made as those supplied with the original Shine, but then there'll probably be a slew of new accessories on which to spend more money with Misfit.

Having used the Shine2 for a day, sync'ing definitely seems faster, though you do notice the size increase.

Still a very pretty wearable compared to most.

It retails for $99.99 from the Misfit Store (they do ship to the UK using DHL so add shipping costs and import duties/VAT).

by Steve Karmeinsky at November 03, 2015 12:51 PM

November 02, 2015

Steve Kennedy

Ofcom tackles Pirate Radio

Ofcom, the super-regulator, has published a report on how it has worked with Haringey Council to remove equipment used for pirate radio broadcasting from buildings operated by the council. 19 stations were closed in 2014.

Ofcom and Haringey estimate that this has saved the council £90,000 in enforcement and maintenance costs. Ofcom is meeting with other councils on the 3rd of November to report their findings from the Haringey cooperation and if this is rolled out across London could save councils £1m per annum.

Though pirate radio is illegal, it can form a basis for community radio. Unfortunately it can also cause real problems: NATS has reported 55 incidents of interference from pirate stations, and there have also been complaints from emergency services and licensed commercial users.

There are schemes in place for local broadcasters to legally broadcast and Ofcom has even allowed DAB stations to be set-up using off-the-shelf hardware and open source software which means a DAB station can be set-up for around £6,000. These use Linux and efforts from OpenDigitalRadio and commercially available software defined radios.

Pirate radio has been groundbreaking in the past and it will be a shame if all Pirate radio stations disappear, but if Ofcom genuinely allow more open access using local commercial DAB multiplexes maybe it won't matter.

by Steve Karmeinsky at November 02, 2015 07:18 PM

October 31, 2015

Jonathan McDowell

Thoughts on the LG G Watch R Android smartwatch

Back in March I was given an LG G Watch R, the first Android Wear smartwatch to have a full round display (the Moto 360 was earlier, but has a bit cut off the bottom of the actual display). I’d promised I’d get round to making some comments about it once I’d had it for a while and have failed to do so until now. Note that this is very much comments on the watch from a user point of view; I haven’t got to the point of trying to do any development or other hacking of it.

Firstly, it’s important to note I already was wearing a watch and have been doing so for all of my adult life. Just a basic, unobtrusive analogue watch (I’ve had a couple since I was 18, before that it was pretty much every type of calculator watch available at some point), but I can’t remember a period where I didn’t. The G Watch R is bulkier than what I was previously wearing, but I haven’t found it obtrusive. And I love the way it looks; if you don’t look closely it doesn’t look like a smart watch (and really it’s only the screen that gives it away).

Secondly, I already would have taken my watch off at night and when I was showering. So while the fact that the battery on the G Watch R will really only last a day and a half is by far and away its most annoying problem, it’s not as bad as it could be for me. The supplied charging dock is magnetic, so it lives on my bedside table and I just drop the watch in it when I go to bed.

With those details out of the way, what have I thought of it? It’s certainly a neat gadget. Being able to see my notifications without having to take my phone out of my pocket is more convenient than I expected - especially when it’s something like an unimportant email that I can then easily dismiss by swiping the watch face. My agenda being just a flick away, very convenient, particularly when I’m still at the stage of trying to remember where my next lecture is. Having walking directions from Google Maps show up on the watch (and be accompanied by a gentle vibration when I need to change direction) is pretty handy too. The ability to take pictures via the phone camera, not so much. Perhaps if it showed me roughly what I was about to photograph, but without that it’s no easier than using the phone interface. It’s mostly an interface for consuming information - I’ve tried the text to SMS interface a few times, but it’s just not reliable enough that I’d choose to use it.

I’ve also been pleased to see it get several updates from LG in the time I’ve had it. First the upgrade from Wear 4.4 to Wear 5.1 (probably via 5.0 but I forget), but also the enabling of wifi support. The hardware could always support this, but initially Android Wear didn’t and then there was some uncertainty about the FCC certification for the G Watch R. I can’t say I use it much (mostly the phone is within range) but it’s nice to see the improvements in support when they’re available.

What about the downsides? Battery life, as mentioned above, is definitely the major one. Mostly a day is fine, but the problem comes if I’m ever away. There’s no way to charge without the charging dock, so that becomes another thing I have to pack. And it’s really annoying to have your watch go dead on you midday when you forget to do so. I also had a period where I’d frequently (at least once a week) find an “Android Wear isn’t responding. Wait/Restart?” error on the watch screen. Not cool, but thankfully it seems to have stopped happening. Finally there are the additional resource requirements it puts on the phone. I have a fairly basic Moto G 4G that already struggles with Ingress and Chrome at the same time, so adding yet another thing running all the time doesn’t help. I’m sure I could make use of a few more apps if I was more comfortable with loading the phone.

The notable point for me with the watch was DebConf. I’d decided not to bring it, not wanting the hassle of dealing with the daily charging. I switched back to my old analogue watch (a Timex, if you care). And proceeded to spend 3 days looking at it every time my phone vibrated before realising that I couldn’t do that. That marked the point where I accepted that I was definitely seeing benefits from having a smart watch. So when I was away for a week at the start of September, I brought the charger with me (at some point I’ll get round to a second base for travel). I was glad I’d done so. I’m not sure I’m yet at the point I’d replace it in the event it died, but on the whole I’m more of a convert than I was expecting to be.

October 31, 2015 03:06 PM

October 27, 2015

David Cantrell