planet.uknot.org

June 30, 2016

Ian Christian

Huel Coffee Recipe – a fantastic mix

This morning I thought I’d try adding a coffee into my Huel, and I was pleasantly surprised.

I’ve always steered clear of the idea of hot Huel – I found the thought of it quite off-putting. However, this morning I was feeling a little adventurous!

I was surprised at how nice it was, so I thought I’d share the recipe (if you could call it that).

 

Coffee Huel Recipe

Step 1:

  • 300ml water – I used room temperature water, which gave me a warm drink; I’ve not tried it with hot water yet.
  • 100g of Huel (or 101g in my case!)
  • 1 Lungo Nespresso coffee

Step 2:

Then mix, blend, shake, whichever you prefer.

Step 3:

Enjoy.


I will be publishing more recipe posts soon.

What’s been your favorite recipe?

 


by pookey at June 30, 2016 08:16 AM

June 29, 2016

Jess Rowbottom

Memories Of The Future

One of the longest-serving computers I have kicking around is a shell server which since 1993(ish) has hosted my email, domain names, and websites.

Of course it’s been patched and reinstalled but I never bothered actually clearing the configurations up to remove old projects. Yesterday it suffered a bit of a meltdown and I took the initiative to finally do some tidying, a process which largely involved manually going through everything bit-by-bit, line-by-line.

As much as life has changed over the past 25 years, nothing quite prepared me for watching it all be deconstructed in slow motion, long-dead projects being removed and finally consigned to an archive, leading me to consider how much I miss the days when we were flying by the seat of our pants.

Newsboy… Namegate… Geeksearch…

When I worked for Internet service provider Mailbox Internet in the late 1990s we were particularly adept at having mad brainwaves, led by my boss at the time. “Drinkie?” he’d ask, “I’ve had an idea!” The alcoholics amongst us (of which there were several) would head to Joe’s Brasserie for a Sauvignon Blanc on the company tab, accompanied by fevered discussion of why this particular lightbulb moment would or wouldn’t work.

Every project was given a designation, prefixed with “HBS” and which stood for “Hare-Brained Scheme”. Some HBS numbers made it into full projects, some were destined to remain as a couple of lines scribbled on the back of a torn beermat. The most infamous was HBS768, a company called Nomination which in turn became CentralNic – I still have the box file somewhere in the loft.

Snyde… Fury.net… Ispex…


Mailbox Colo Centre

One night we drunkenly gave thousands of dialup customers a load of webspace while logged in from the Southern Cross pub on a Nokia 9000. Another time we built a data centre which ended up full of every kind of Frankenstein’s monster of servers; it suffered the most horrific air conditioning issues yet gave rise to websites which are now household names.

The server I was now reworking saw its fair share of action in that data centre. Once upon a time it had been actively involved in work for both independent and ICANN-sanctioned domain name registries. It still had configurations to handle email for a shedload of companies whose email servers were temporarily offline and who had never officially told me they had been fixed (and at least three domains whose email was still being handled there, a decade later).

MusicResources… Shopgate… Printshed…

The configurations led me out of the Dot Com era working for an ISP and into my life as an IT contractor, providing emergency hosting in times before virtualisation and containers became the norm. It hosted household names (including, bizarrely, Aquafresh), and a proof-of-concept for the Sky TV programme guide, written in Perl. Strange, weird and wonderful ideas – I mean, who the heck thought a database of tapas restaurants might be a good idea?

I now work with a woman who was born in the same year we initiated several of those projects – she’s a front-end developer who never knew a time when we didn’t have the always-on connectivity of broadband. While she can spin up a VPS and get a project deployed in a few hours, she doesn’t know the joy of pointing at tin and proudly saying, “Look! These are our servers!”. Don’t get me wrong, it’s a blessing that we don’t need to worry about maintenance and suchlike, but it doesn’t feel quite the same pointing at an SSH login.

Cashbase… MI5… Listbunny…

Finally, it felt as though I was leaving a house which had been home for decades. As I finished clearing everything up, there was a pang of regret, as though I was quietly putting the lid on the Wild West era of the Dot Com Revolution. Will there be another time like it? My old boss still allocates HBS numbers, so who knows…!

No flowers.

 

Header photo by Shish Batal, who sneakily took a photo of me fixing the emergency pager system at Mailbox back in 1998.

by Jess at June 29, 2016 11:56 AM

June 28, 2016

Alex Bloor

New Parody Song, after 3 years! “We Didn’t Vote for Brexit”

Based on “We didn’t start the fire” by Billy Joel, here’s what almost an entire weekend’s solid work yields! As always, lyrics and vocals by me. Hope you like it. Please share if you do! It’s topical at the moment, … Continue reading

by Alex Bloor at June 28, 2016 05:07 PM

Mark Goodge

Evesham Traffic

Evesham Civic Society is having a meeting this evening to discuss various proposals for improving the traffic flow in the town centre, and in particular restoring two-way access from Workman Bridge to/from the High Street. I can’t be there, as I have another meeting to attend at Wychavon, but I thought I’d stick my oar in anyway with some comments on the suggested solutions.

You need to read these in conjunction with the PDF created by the Civic Society. Since the suggestions aren’t otherwise numbered, I’ll refer to them by the names of the proponents in the order they appear on the PDF.

Alan Pye

This makes Swan Lane and Mill Street two-way and reverses the flow in Oat Street and Chapel Lane.

The main problem with this, as with any scheme which seeks to restore two-way traffic on Swan Lane, is that doing so would almost certainly mean losing the on-street parking. That’s not going to be popular with local residents who don’t have anywhere else to park. It also has two exits onto High Street, one from Swan Lane and one from Oat Street, in very close proximity. That’s unlikely to be practical, especially as these locations won’t be suitable for mini-roundabouts. There are also questions about whether the hill section of Mill Street can take two-way traffic, although that could potentially be addressed by a priority system.

On the other hand, it has the advantage of keeping two lanes of traffic onto High Street, albeit on two different streets. That’s important, for reasons I’ll explain below.

Anthony Dowling

This system simply reverses the flow of Swan Lane and Oat Street, with Mill Street being made two-way. It has the advantage of not needing any changes to on-street parking, and is one of the most commonly proposed solutions on social media.

However, it still has the issue of two exits onto High Street close together, as Avon Street will still be there. More importantly, it halves the capacity of the exit from the east onto the High Street. At the moment, there are two lanes of westbound traffic in Swan Lane, meaning that when the lights are green, two vehicles at once can exit the junction. From Oat Street, only one at a time would be able to do so. That’s either going to mean longer delays for High Street traffic, or longer tailbacks in westbound traffic, or both. As things stand, the tailbacks in Swan Lane reach Chapel Street at peak times. If all that traffic had to use Oat Street, it would reach much further back.

Schemes involving two way traffic on Swan Lane and/or reversing the flow in Oat Street could work if the parking issue was considered unimportant, and if the volume of vehicles leaving the zone to the west (over Workman Bridge) was high enough to significantly mitigate the loss of eastbound capacity. But I suspect that neither of these would be the case.

James Fleck

This system retains the existing one-way flow in Oat Street, Chapel Street and Swan Lane, but makes Mill Street two-way. As such, this is by far the simplest suggested solution, and avoids all the gotchas inherent in changing Swan Lane and Oat Street.

My main caveat for this is, again, the hill section of Mill Street. I have a feeling it may not be easy to create a road layout that allows two-way traffic here, especially if buses are still allowed to use this route (and banning them would significantly affect route patterns).

I also think that a mini-roundabout at the Bridge Street/Mill Street junction wouldn’t work, as the junction there also has to cater for Monks Walk which is somewhat offset from Mill Street. However, traffic lights would work well enough here, so that’s not a problem.

James Powell

This is just plain barking.

Kate Gardner

This is, effectively, the same as Alan Pye’s proposal, with the exception that it allows two-way traffic on Chapel Street.

As such, it suffers from the same issues as Alan Pye’s scheme, but with the added disadvantage that the bus stop and parking on Chapel Street would also need to go!

Mark Goodge

This is my suggested scheme. Rather than go into it in detail here, you can see the full article elsewhere on my website.

Phil Cooper

This is much the same as James Fleck’s proposal, with only minor differences at the junctions. The main difference is making the entrance to Rynal Place one way, presumably in order to prevent the use of Lancaster Grove as a rat run. I suspect that this would be unpopular with residents of the Rynals, though, as it would force them to go out onto the High Street in order to head west.

None of these schemes is perfect (not even mine!). They all have drawbacks of one form or another. Which is one of the reasons why all of them are impossible to implement without detailed traffic data and computer simulations. Fortunately, that data collection is now in progress, so we should have some idea in the not too distant future of what is and is not practical. Let’s just hope that some solution to two-way traffic between Workman Bridge and High Street is on the cards after the computer has done its stuff.

by Mark at June 28, 2016 01:14 PM

The next Prime Minister

Nominations for the Conservative Party leadership election – and, by extension, the internal election for the next Prime Minister – open today and close on Friday. Conservative MPs will whittle down the contenders to a final two, who will then be voted on by the membership as a whole. The precise timetable after that depends on the number of candidates, but we should know the winner by late August or early September.

As a party member, I will have a vote in the final ballot. I’m not going to say who I’ll be backing until I know who I have to choose between. But these are some of the principles which will guide my choice.

Firstly, it needs to be someone who can unite the party and the country.

That may sound like a meaningless platitude which will be uttered by every candidate. But it isn’t, and it matters. We’ve just had a very bruising referendum contest, and it’s important that the new leader is someone who can work with both sides. Someone who will appoint ministers on the basis of ability, not cronyism and patronage.

David Cameron has won two general elections by appealing to the centre ground of British politics. That, too, is an essential attribute of his successor. We need someone that the floating voter is comfortable floating towards. It isn’t enough to rely on Labour’s shortcomings to win an election. We need to be able to offer a positive choice to the ordinary, non-political voter. More than ever, a post-EU Britain needs a one nation government.

Secondly, we need someone who appreciates and encourages the work of grassroots Conservatives at association and branch level.

The new leader of the Conservative Party won’t just be responsible for the parliamentary party. It needs to be someone who recognises that politics doesn’t just happen in Westminster, but in county halls, town halls and civic centres across the country. Someone who makes it easy for me and my colleagues to be proud of what we stand for.

The timetable for any possible hustings will be short, but I want to see the final two candidates making a strong effort to reach out to ordinary members and telling us directly why we should vote for them. That’s another reason why I’m not backing any specific candidate yet. I want to hear what they actually have to say to us.

Finally, I want someone who I can trust with the things that matter to me.

Obviously, every party member will have their own priorities which will reflect their own experience and circumstances. But my choice will be influenced by the things that I care about: a strong commitment to civil liberty and freedom of conscience, an understanding of technology and the value to the UK’s economy of an open Internet, a bias towards evidence-based policy-making, and a preference for localism over a one-size-fits-all approach.

The decisions we will make over the next few weeks will have long-lasting ramifications.

Under normal circumstances, party leadership elections are held on a timetable intended to give the new leader time to settle in before any really difficult decisions need to be made. That won’t be the case this time.

The incoming Prime Minister will need to get straight to work on our negotiations with the EU. An early general election is also on the cards. That makes it all the more important that we in the party think long and hard about who we want in that role. The potential difference in outcome between the right and wrong choice could not be more stark.

by Mark at June 28, 2016 09:01 AM

June 27, 2016

Jonathan McDowell

Hire me!

It’s rare to be in a position to be able to publicly announce you’re looking for a new job, but as the opportunity is currently available to me I feel I should take advantage of it. That’s especially true given the fact I’ll be at DebConf 16 next week and hope to be able to talk to various people who might be hiring (and will, of course, be attending the job fair).

I’m coming to the end of my Masters in Legal Science and although it’s been fascinating I’ve made the decision that I want to return to the world of tech. I like building things too much it seems. There are various people I’ve already reached out to, and more that are on my list to contact, but I figure making it more widely known that I’m in the market can’t hurt with finding the right fit.

  • Availability: August 2016 onwards. I can wait for the right opportunity, but I’ve got a dissertation to write up so can’t start any sooner.
  • Location: Preferably Belfast, Northern Ireland. I know that’s a tricky one, but I’ve done my share of moving around for the moment (note I’ve no problem with having to do travel as part of my job). While I prefer an office environment I’m perfectly able to work from home, as long as it’s as part of a team that is tooled up for dispersed workers - in my experience being the only remote person rarely works well. There’s a chance I could be persuaded to move to Dublin for the right role.
  • Type of role: I sit somewhere on the software developer/technical lead/architect spectrum. I expect to get my hands dirty (it’s the only way to learn a system properly), but equally if I’m not able to be involved in making high level technical decisions then I’ll find myself frustrated.
  • Technology preferences: Flexible. My background is backend systems programming (primarily C in the storage and networking spaces), but like most developers these days I’ve had exposure to a bunch of different things and enjoy the opportunity to learn new things.

I’m on LinkedIn and OpenHUB, which should give a bit more info on my previous experience and skill set. I know I’m light on details here, so feel free to email me to talk about what I might be able to specifically bring to your organisation.

June 27, 2016 10:21 PM

Ian Christian

Breville Active-Blend Pro / Black Review

I previously reviewed the Breville Active-Blend, but since then they have released the new Active-Blend Pro.  If you’re wondering how the new VBL120 model compares to the previous model (VBL062) then this will answer your questions!  This is a fantastic blender for the price, and it looks a lot smarter than its predecessor.

The Improvements

Many of the features and functions of this blender are identical to its predecessor.  The body of the blender is now black and looks far slicker.

The lid on the previous model never leaked in my experience, but the new one still seems much improved (see pictures below).  The bottle now has a rubber grip, improving the look and feel.  It is also slightly smaller than those included previously: this one is 500ml vs the 600ml of the former model.

The blades and the bottles from the VBL062 (the old model) still fit in the new model, so you can use them as extras or spares if you’re upgrading.

There’s a subtle difference in the angle of the blades on the new blender – presumably this provides a better and more consistent blend.  An advantage of the Pro version is improved dry blending, which I imagine is part of this slight alteration.

How does it perform?

I am not sure how this performs compared to the NutriBullet, but at a fraction of the price it does everything I need from a blender.

On unboxing the Active-Blend, I decided to test it with some nuts, and some ice.  I was impressed – within a second the nuts were dust! The blender didn’t struggle at all with ice either, crushing it easily.

The blender is 300W, just like its predecessor.  It makes about the same amount of noise, perhaps a little quieter.

The video shows it easily crushing ice, and making a fine powder from some mixed nuts.

Pictures

  • The new lid with improved seal
  • Insulation sleeve over the bottle
  • New bottle with rubber grip
  • New blades vs old blades
  • New and old blender
  • The two blenders side-by-side

Conclusion

If you already have a blender, or the previous model Active-Blend, there’s no compelling reason to upgrade.  However, if you’re like me and just wanted a better-looking blender that doesn’t look a little bit like a kids’ toy, the Breville Active-Blend Pro is available from Amazon.


by pookey at June 27, 2016 08:21 PM

Mark Goodge

Two tribes go to work


Following the EU referendum, there are, broadly speaking, four groups of people in the UK:

Group A – Hardcore Remain voters, who are not only unhappy with the result but are unwilling to accept the outcome and insist on either flinging insults at Leave voters or actively trying to overturn the result (or both).

Group B – Moderate Remain voters, who are disappointed with the result but are willing to respect the outcome and now want to ensure that any negative effects on a post-EU UK are minimised.

Group C – Hardcore Leave voters, who see this as an opportunity to gloat at their opponents, who don’t care about reconciliation and want to take this opportunity to impose their will on a post-EU Britain.

Group D – Moderate Leave voters, who are pleased to have won but recognise that they only have a slim majority, that there are a lot of people who disagreed with them, and that those views should still be heard.

Whichever way you voted in the referendum, I hope it’s obvious that the B and D “moderate” groups are the ones acting in the UK’s best interests (and, for that matter, the EU’s). Sensible, intelligent people need to cooperate to make sure that the UK’s relationship with the EU is renegotiated to provide the best possible outcome for all parties.

If you’re a Remain supporter, that’s obviously going to be sub-optimal to staying in, but it’s still possible to make the most of a less than ideal situation. If you voted Leave, then compromising on some of your ideals will be worth it to ensure a smooth transition to a post-EU Britain.

It’s time for the two sensible tribes to work together. Ignore the ranters and ravers, the xenophobes and anti-democrats, and concentrate on the future rather than the past. Our future depends on it.

by Mark at June 27, 2016 04:14 PM

June 23, 2016

Ian Christian

30 Grams of Protein Within 30 minutes of Waking

I’ve been training hard every morning for the last 6 weeks, doing Insanity while on my Huel diet.  Progress has been great; however, I wanted to see if I could find a way of improving my performance.  Tim Ferriss had a potential solution…


The 4 Hour Body

The 4 Hour Body by Tim Ferriss is a fantastic book, and it suggests something called 30 in 30 – that is, 30 grams of protein within 30 minutes of waking.  Tim claims that making this change in your diet can hugely improve weight loss, giving the example of his father losing 8.5kg in a month by making this change!

Whether you believe this or not, I make 2 suggestions

What does 30g of protein look like?

  • 5 large eggs
  • just over half a tin of tuna
  • 300g of cottage cheese

That’s quite a lot of food to face first thing in the morning!  However, simply having 100g of Huel will give you 30g of protein along with the vitamins, fats and carbs to balance the diet out.  Huel is a powdered, nutritionally complete food, so it isn’t just a protein powder (see below for discount code).

I find that I can exercise within 30 minutes of having Huel, so I could have it as soon as I woke up, have a coffee, re-hydrate and then do my morning workout.

Previously I was eating after exercising, but as it was such a simple change to try out, I gave it a go!

The Results

I log most of my workouts using a MZ-3 MyZone belt, and with Insanity repeating workouts it was simple for me to compare before and after.

Here’s the before:

Insanity - before

And after:

Insanity - After 30 grams of protein in 30 minutes of waking

The results were quite impressive! My average effort improved 5%, more calories were burnt, and I was working in my maximum heart rate zone for much more of the time.

Also, whether it was due to the increased effort of exercise or because of the 30 grams of protein, my body was like a furnace on the drive to work afterwards.  The air conditioning had to be lowered 2 degrees more!

If you fancy giving Huel a try, use this link to receive £5 off your first order: http://r.sloyalty.com/r/uK8hofmOf5F5

I highly recommend the 4 Hour Body book by Tim Ferriss.


by pookey at June 23, 2016 07:41 PM

Jonathan McDowell

Fixing missing text in Firefox

Every now and again I get this problem where Firefox won’t render text correctly (on a Debian/stretch system). Most websites are fine, but the odd site just shows up with blanks where the text should be. Initially I thought it was NoScript, but turning that off didn’t help. Daniel Silverstone gave me a pointer today that the pages in question were using webfonts, and that provided enough information to dig deeper. The sites in question were using Cantarell, via:

src: local('Cantarell Regular'), local('Cantarell-Regular'), url(cantarell.woff2) format('woff2'), url(cantarell.woff) format('woff');

The Firefox web dev inspector didn’t show it trying to fetch the font remotely, so I removed the local() elements from the CSS. That fixed the page, letting me pinpoint the problem as a local font issue. I have fonts-cantarell installed so at first I tried to remove it, but that breaks gnome-core. So instead I did an fc-list | grep -i cant to ask fontconfig what it thought was happening. That gave:

/usr/share/fonts/opentype/cantarell/Cantarell-Regular.otf.dpkg-tmp: Cantarell:style=Regular
/usr/share/fonts/opentype/cantarell/Cantarell-Bold.otf.dpkg-tmp: Cantarell:style=Bold
/usr/share/fonts/opentype/cantarell/Cantarell-Bold.otf: Cantarell:style=Bold
/usr/share/fonts/opentype/cantarell/Cantarell-Oblique.otf: Cantarell:style=Oblique
/usr/share/fonts/opentype/cantarell/Cantarell-Regular.otf: Cantarell:style=Regular
/usr/share/fonts/opentype/cantarell/Cantarell-Bold-Oblique.otf: Cantarell:style=Bold-Oblique
/usr/share/fonts/opentype/cantarell/Cantarell-Oblique.otf.dpkg-tmp: Cantarell:style=Oblique
/usr/share/fonts/opentype/cantarell/Cantarell-BoldOblique.otf: Cantarell:style=BoldOblique

Hmmm. Those .dpkg-tmp files looked odd, and sure enough they didn’t actually exist. So I did a sudo fc-cache -f -v to force a rebuild of the font cache and restarted Firefox (it didn’t seem to work before doing so) and everything works fine now.

It seems that fc-cache must have been run at some point when dpkg had not yet completed installing an update to the fonts-cantarell package. That seems like a bug - fontconfig should probably ignore .dpkg* files, but equally I wouldn’t expect it to be run before dpkg had finished its unpacking stage fully.
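
In case this crops up again, here’s a rough sketch of automating the same check – assuming fc-list and fc-cache are on the PATH – which asks fontconfig what it knows about, flags anything pointing at .dpkg-tmp leftovers or files that no longer exist, and forces a cache rebuild if so:

#!/usr/bin/env python3
# Rough sketch: look for stale fontconfig entries (.dpkg-tmp leftovers or
# missing files) and rebuild the cache if any are found. Assumes fc-list
# and fc-cache are available on the PATH.
import os
import subprocess

def stale_font_entries():
    out = subprocess.run(["fc-list"], capture_output=True, text=True, check=True).stdout
    stale = []
    for line in out.splitlines():
        # fc-list output is "path: Family:style=Style"; take the path part.
        path = line.split(":", 1)[0].strip()
        if path.endswith(".dpkg-tmp") or not os.path.exists(path):
            stale.append(path)
    return stale

if __name__ == "__main__":
    stale = stale_font_entries()
    if stale:
        print("Stale fontconfig entries found:")
        for path in stale:
            print("  " + path)
        # Force a full rebuild of the font cache (system caches may need sudo).
        subprocess.run(["fc-cache", "-f", "-v"], check=True)
    else:
        print("Font cache looks clean.")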

June 23, 2016 02:23 PM

June 22, 2016

Mark Goodge

After the referendum

This time tomorrow morning, we will be voting in the EU referendum. By this time the day after that, we should know the result.

So, what happens next?

Obviously, that depends on what the result is. But this is what I want to see, for both options.

If we vote to Remain

Firstly, it’s essential to bear in mind that voting to stay is not voting for the status quo. Nor is it an endorsement of every aspect of the EU and everything that it does. The EU is horribly broken and dysfunctional in very many ways. If we are staying in it, we need to take the lead both in highlighting the problems and coming up with ways to address them.

Secondly, a choice to remain is not an endorsement of the Remain campaign. Some of the ad hominem attacks on Leave campaigners have been truly appalling. If the Remain campaign is victorious, more than anything it needs to follow this up by being gracious.

A vote to stay is not a rejection of the need to change. It just means change in a different way to leaving.

The long term future of the EU needs to work for everybody, not just those who wholeheartedly buy into its vision. That means taking the criticisms of the EU levelled by the Leave campaign seriously, and seeking to address those from within the structure.

If we vote to Leave

A vote to leave is a step into the unknown. But that doesn’t mean it has to be a leap in the dark. The key priorities of the government over the weeks, months and even years that follow a decision to leave will be about how best to secure the long-term interests of the UK.

There are many possible routes forward if we leave. Some of them are as different as the choice between leaving and remaining. And even if we leave, the opinions of those who voted to stay are still relevant in that debate.

A leave vote means a majority want out of the EU. But that doesn’t necessarily mean a majority want out of free trade, or free movement of people, or cross-border consumer protection.

Voting to leave isn’t the end of a process. It’s the start of one. The start of a new Europe that better serves the needs of all the countries in Europe, whether in or out of the EU.

Whatever we choose

No matter which vision for our future wins, both sides have to accept the result.

No carping, no complaining about the other side taking liberties with the campaign. No conspiracy theories. No accusations of ballot rigging. No subtle (or unsubtle) undermining of the will of the majority. No grudges.

Whatever happens, we have to move on. This has been a deeply unpleasant campaign, with very little to be proud of on either side. It’s time to put that behind us and make a commitment to making this decision work. For all of us.

by Mark at June 22, 2016 07:12 AM

June 20, 2016

Ian Christian

Huel and Insanity – a healthy combination

I’ve made some great progress that I’m really proud of, and I wanted to share it. This post serves as extra evidence that using Huel as part of a healthy lifestyle can make a huge difference to your life.

For the last 5 weeks I have been combining a 60% Huel diet with the Insanity programme from Shaun T.  If you don’t know what Huel is, it’s a powdered replacement for food which aims to provide all the fats, carbs, protein and vitamins you require.  Check out my previous article for more information (http://pookey.co.uk/wordpress/archives/677-huel-nutritionally-complete-food-replacement).

My trial has resulted in:

  • Rapid fat loss
  • Weight loss (this wasn’t a primary goal)
  • Increased energy levels
  • Reduction in food bills
  • Much more spare time

Have a look at the difference in just 5 weeks!

Huel Before and After Photo

Additional changes I’ve made in lifestyle (details not covered here – I might put together a full detailed programme in future) have resulted in additional benefits of:

  • less need for sleep
  • clearer head space
  • an increase in productivity

Whilst undergoing my changes, I have been able to maintain a relatively relaxed diet too. On Monday I was at a wedding, and despite eating a lot of cake, cheese, hog roast and BBQ (seriously, it was shameful) I have still continued to lose weight and improve fitness!

My Schedule

My typical week day is:

  • wake up around 6:45
  • espresso, water
  • Insanity as directed by the programme
  • get to work for 9am.
  • 9am – first meal, Huel – 2 scoops
  • 11am – second meal, Huel – 2 scoops
  • 1pmish, when the weather is good enough, 6000 step walk
  • 2pm – 3pm – third meal, Huel – 2 scoops
  • 4pm – 5pm – fourth meal, Huel – 2 scoops
  • 6pm – 7pm – dinner – ‘normal’ food, generally healthy but pasta, rice, potatoes have remained on the allowed list.

Typical weekends are hard to define. Insanity Saturday morning, usually Huel for breakfasts, but I tend to eat out, and eat chaotically.

I am now increasing my Huel intake to add an additional 2 scoops throughout the day (I’m adding at meals 1 and 3) as I do not want to lose any more weight.

The Results

Below you can see both weight and fat loss over the last 5 weeks, with measurements being taken twice a day, except where I’ve been away.  The graphs are from the Withings Health app, and I’m weighing using the Withings WS-50 scales (which are fantastic, by the way!).

My Weight loss on Huel and Insanity

If you fancy giving Huel a try, use this link to receive £5 off your first order: http://r.sloyalty.com/r/uK8hofmOf5F5

Huel Discount Code


by pookey at June 20, 2016 11:40 AM

Huel – a nutritionally complete food replacement

I recently gave a talk about Huel – a nutritionally complete food replacement – covering the benefits and how it compared to the Clean Eating Challenge I did a year ago.  About 60% of my diet is Huel these days, and so I’m a huge fan of it.  I hope the speech below helps you to understand why!

If you want to give Huel a try, the link below will give you a £5 discount code.  At the time of writing, your first order will also come with a free t-shirt and shaker!

Huel Discount Code


My Talk on Huel

Do you sometimes miss lunch or breakfast because you don’t have time?

Do you know how many calories you ate yesterday?

Do you know the ratio of proteins, fats and carbohydrates?

Last year, I enrolled on a ‘Clean Eating Challenge’ course – an 8 week course teaching you about nutrition.

The goal was to eat food as close to how nature intended it. This means no pasta, no bread, and certainly no cake! No extra calories were allowed to come from drinks, so this included alcohol.
I was to eat 40% of my daily calories as carbs, 30% protein, and 30% fats – this is a common macronutrient ratio used by athletes. This was to be done while consuming 5 or 6 smaller meals throughout the day rather than 3 large meals.

At the first meeting we were weighed and measured, our body fat percentage was calculated, and we took a fitness test to benchmark ourselves.
Every week focused on a different lesson, and by the end of week one I had already learnt a huge amount. Everything I ate was measured, weighed, and input into a food tracking app called ‘MyFitnessPal’.

I was struggling to eat higher levels of protein without hitting high fat content, so I invented a ratio I called the ‘PF Ratio’ – protein to fat. I knew that given my targets I needed to hit a PF ratio of 2.3 across the day. Tuna has a PF ratio of 6 – fantastic, if only I liked tuna. Chicken has a PF ratio of 8.9 – so I was eating plenty of chicken!
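
Where does the 2.3 figure come from? It falls straight out of the 30/30 split once you assume the usual rule-of-thumb energy densities of roughly 4 kcal per gram of protein and 9 kcal per gram of fat (those densities are the standard textbook figures, not something from the course itself). A quick sketch:

# Back-of-the-envelope check of the PF ratio target. Assumes the standard
# energy densities of ~4 kcal/g for protein and ~9 kcal/g for fat.
calories = 1850                    # middle of the 1800-1900 kcal/day range
protein_g = calories * 0.30 / 4    # 30% of calories from protein
fat_g = calories * 0.30 / 9        # 30% of calories from fat
print(round(protein_g), "g protein,", round(fat_g), "g fat")
print("PF ratio:", round(protein_g / fat_g, 2))   # 9/4 = 2.25, i.e. roughly 2.3

Note that the calorie total cancels out – the ratio is just 9/4 – so any day split 30% protein / 30% fat lands on roughly 2.3 regardless of how much you eat.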

As the weeks went on, I was learning more and more about what I was eating, and was able to balance my meals with less and less effort.
Sunday had turned into ‘food prep day’ – I spent most of the day shopping for food, weighing it, cooking it, separating it into portions – my kitchen was overrun with Tupperware dishes. The key to success was preparation, and preparation took so much time!

It’s amazing how much food you can eat when you cut out the junk food. I was meant to be eating somewhere between 1800-1900 calories a day, and sometimes eating enough food was a challenge!

So, did this diet work?

For me, this diet worked wonders. I lost 3kg in the 8 weeks, and my body fat dropped significantly too. My definition was vastly improved; the before and after photos were borderline unbelievable for 8 weeks.

However – it was a lot of work. Hitting the right ratios of carbs, fats and protein was an awful lot of work. I never even considered if I was getting the correct levels of fibre, vitamin C, Omega 3…. That level of detail and planning would be almost impossible!

It wasn’t just about the food though – I was training 5 – 6 times a week. My life for the 8 weeks became exercise, food preparation, and eating, with little time for much else.

I thought to myself, there has to be a simpler way. As it turns out, I’m not the only person who’s thought this.

In February 2013, a software engineer in California, Rob Rhinehart, posted a blog article which started:

‘Food is the fossil fuel of human energy. It is an enormous market full of waste, regulation, and biased allocation with serious geo-political implications. In my own life I resented the time, money, and effort the purchase, preparation, consumption, and clean-up of food was consuming.’

I could certainly relate to that resentment at the end of my clean eating challenge. I’m also horribly guilty of food waste. When was the last time you binned something that went out of date before you got round to eating it? The charity Wrap claims that 15 million tonnes of food is wasted in the UK every year – 50% of which is household waste.

In this blog article, he writes ‘I haven’t eaten a bite of food in 30 days, and it’s changed my life.’

As an engineer, his thinking was that the body doesn’t need food as such – what it needs is the chemicals and elements within it.

So what happens if you combine all the elements in as near to neat form as you can, mix them with water, and drink it instead of eating?

This was the birth of Soylent.

Do you want to know what happened when I gave up eating for a week and lived off ‘food replacement’?

Nothing.

I was disappointed – I thought I’d have a story to tell – but the result was simply that the product did exactly what it said on the tin. I wasn’t hungry, I felt great, and there was no change in body fat or weight. The experience was completely uneventful.

Unfortunately Soylent isn’t available in this country, so I tried a few brands. I eventually settled on a product called Huel. It provides a 40/30/30 macronutrient split, just like my clean eating diet aimed for.  Huel was created by a UK entrepreneur called Julian Hearn with a similar goal to Soylent.

So, what are the benefits?

  • No more food waste, the product lasts for months
  • Lower environmental impact – Huel is vegan.
  • I avoid bad food choices. When hungry it’s so easy to grab a burger, but this leaves no excuses.
  • I save money. For under £40 a week I am getting a balanced high protein diet. My clean eating challenge focused on high quality food, which was costing a fortune.
  • I feel good all the time. No lows, no highs, no bloated feeling for having eaten too much.
  • I save time. This is the biggest impact. Home from work at 17:30, by 17:40 I had had my dinner and done the washing up. I estimate a 2 hour saving a day; over a week that’s 14 extra hours – almost an entire extra day’s worth of time.

These days, I still eat food. I eat it socially, and when there’s a healthy food choice at hand – but the convenience of Huel often wins, and I estimate that 60% of my intake now comes from Huel.

If you want to save time, and feel great, why not give Huel a try?


You can get a discount on Huel by using this link: http://r.sloyalty.com/r/uK8hofmOf5F5

If you want to see the radical change Huel and exercise had on me in just 5 weeks, have a look at my article Huel and Insanity – a healthy combination


by pookey at June 20, 2016 05:40 AM

June 19, 2016

Ian Christian

Someone wants to kill me – A hilarious scambaiting conversation

6 months ago I received a spam email, which usually I’d simply ignore but this one sparked my interest:

I want you to read this message very carefully, and keep the secret with you till further notice, You have no need of knowing who i am, where am from,till i make out a space for us to see, i have being paid $50,000.00 in advance to terminate your life

Yesterday I was working on my website and I came across a scambaiting conversation with Igor the Assassin from around Christmas last year.  Re-reading the story I’d forgotten how funny it was (even if I do say so myself!).  Read the funny exchange here. I still can’t quite believe how it ended!


by pookey at June 19, 2016 06:56 AM

June 15, 2016

Steve Kennedy

Plazmatic X dual beam lighter

There are lighters and then there are lighters - the Plazmatic X dual beam lighter definitely falls into the second category.

The lighter is 7.3cm x 3.6cm x 1.25cm, which makes it about twice as high as it is wide (which feels slightly wrong if you're used to the size of a Zippo); this is probably needed to fit a decent battery and the high voltage electronics.

Amazingly the lighter doesn't use any fuel (no gas or petrol to worry about) as it uses plain old electricity to produce a dual plasma beam. Say you're lighting a cigarette: you hold it near the beam and suck, and the beam bends and quickly ignites the cigarette. A very satisfying experience. It's also meant to be able to light all sorts of other smokables (no judgement here).

The lighter can also be used in all sorts of weather conditions including wind as the plasma beam will still be generated (as long as there's a charge of course).

A full charge will give 50-100 lights (though it's so pretty that it's likely people will just click on the button to see the effect). It charges via a micro USB port (USB cable supplied) and charges to full in about 2 hours.

The lighter comes in various finishes (though they seem to be skins attached to the same base model rather than say anodising the body itself).

The lighter is available directly from the Elementium website for $59.95 (there's free shipping at the moment to the US). This may seem a lot for a lighter, though never having to buy fuel should offset it.

Rate: 9/10 (only because of the case).

by Steve Karmeinsky (noreply@blogger.com) at June 15, 2016 06:02 PM

Mark Goodge

Referendum musings

It’s just over a week to go to the EU referendum, so I thought I’d jot down a few thoughts.

To begin with, let me say what this post is not. I will not tell you how to vote, or try to persuade you which way to vote. And I will not tell you which way I will vote.

There are, however, a number of things which need to be said.

Lies, damned lies and the battle for your vote

The first is that, by and large, the conduct of both sides has been utterly appalling. That doesn’t apply to every individual involved in either campaign, and very many of the grassroots activists on both sides – of which I count many, again on both sides, as friends – have been doing their best to argue their cause in a reasonable manner.

But, still, many of the headline claims made by the leaders of both sides are, at best, pure speculation dressed up as fact or, at worst, outright lies.

We’re doomed, I tell you, all doomed

On the Remain side, the increasingly shrill warnings of economic disaster, rise in terrorism and imminent collapse of civilisation simply do not ring true. By all means, consider a worst case scenario. But a description of the most extreme possible outcomes has to go hand in hand with a realistic assessment of the risk.

If it rains hard enough for long enough, my house could be flooded. We are on the edge of the “thousand year” flood zone. But, realistically, that is very, very unlikely to happen, at least in my lifetime or that of anyone I eventually sell the house to.

The same applies to predictions of what might happen if we leave the EU. It could, theoretically, result in economic disaster. But just saying that it could is insufficient information. To be useful, that has to be part of an overall risk assessment with outcomes ranging from best case to worst case, and with informed and expert predictions of how likely all the various possible outcomes are.

It doesn’t add up

On the Leave side, the repeatedly bandied figure of £350m a week paid to the EU is simply false. And arguing that that’s the right figure to use, because it’s what we would pay if it wasn’t for the rebate, is meaningless. That’s like saying that if I buy a shirt from M&S at 30% off, I should still assess the cost on what I would have paid without the discount. My accountant would laugh at me if I did that.

It’s also simply wrong to say that, whatever we pay to the EU, we could instead spend it on the NHS if we left. That disregards all the money that is currently spent by the EU on things within the UK. The absolute most we could spend on the NHS if we left is the total net cost of EU membership – which is a heck of a lot less than £350m a week. But even that disregards the possibility that leaving the EU may incur other costs which also have to be met. Realistically, this is simply an impossible promise.

Little Britain and Big Brother

The Remain argument that the UK would have to adopt something like the “Swiss model” or the “Norwegian model” to get access to the benefits of the single market if we leave is equally specious.

The UK has a population of 64 million, and a GDP of $2,768 billion (measured in USD as that’s the common unit of comparison). Norway has a population of 5 million, and a GDP of $513 billion. Switzerland has a population of 8 million, and a GDP of $685 billion. In other words, the UK has a population nearly five times that of Norway and Switzerland combined, and a GDP more than double their combined total.

If we leave the EU, we will not need either a “Swiss model” or a “Norwegian model” in our relationship with the remaining EU. We will have a “British model”, negotiated to take account of our economic and population strength. We can’t say for certain what this will look like, but we can be sure it won’t look like anything which currently exists.

Our only goal will be the western shore

On the other hand, leaving the EU will not solve our immigration “problem”. Quite apart from the fact that it is far less of a problem than many people believe – there is absolutely zero evidence that immigrants are squeezing local-born people out of the employment market, for example – the reality is that EU migration is still lower than non-EU migration.

Given that many EU migrants would, if we were not part of the EU, fall into the same categories as the non-EU migrants allowed to come here and would therefore continue to be allowed to come in the future, the idea that we could make a sizeable dent in immigration by leaving is laughable. And that’s assuming we won’t negotiate an agreement with the EU which includes free movement of labour anyway. I strongly suspect we would, because overall, it would be beneficial to us to do so.

The face doesn’t fit

If we disregard the guff from both sides (and there’s an awful lot of it to disregard), though, what are we left with? Can we, as some suggest, make our decision based on the identities of those arguing for either option?

The answer to that is “no”. I’ve previously argued that, when it comes to a general election, you can’t just vote for policies – you have to take account of the perceived competence of those who will implement them as well. But a referendum is the complete opposite. No matter what we decide, we will still have the same government the day after the referendum as before, and we will still have the same options at the next general election.

Voting Remain because you dislike Nigel Farage, or Leave in order to snub David Cameron – or vice versa – is the worst possible reason for making your decision. Voting Leave will not put Ukip in power. Or Jeremy Corbyn, for that matter. Voting Remain is not an endorsement of the current government. The decision we collectively make on 23rd June 2016 will have an effect long after all of us currently active in politics have retired or died. Casting your vote now on the basis of which set of faces you like the most is one of the most mind-numbingly stupid things you could do. So don’t do it.

Making either choice is a decision based on what you, after giving the matter careful consideration, honestly believe is best for the UK in the long run. At least, I hope it is. And that means cutting through the dodgy headlines and looking beyond the faces to try and find the facts.

I’m not, in the space of this article, going to try and give you the facts, beyond those I’ve obliquely referred to already. But I am going to make a few observations.

Break point

After my comment on the behaviour of the campaigners, the second most important thing which must be said is that the EU is badly broken in a number of areas. That fact is, I think, beyond dispute. A full list would be far too long, but the economic sacrifice of Greece on the altar of the Euro and the mismanagement of the migrant crisis are two obvious examples. The question is not “Is there anything wrong with the EU?”. The question is “Can we fix it?”.

In this context, I disagree with the criticism of Jeremy Corbyn for seemingly being lukewarm over the EU, or with Boris Johnson for dithering before coming out for Leave. In both cases, these are the actions you would expect of someone who recognises that there are strong arguments for both options, but that ultimately you have to make a choice between them. I’m not telling you which of Johnson or Corbyn you should vote with, but both of them make a better role model here than those who adopted a knee-jerk position for either Remain or Leave right from the off.

Between the devil and the deep blue sea

The reality is that there are some very good arguments on both sides. Anyone who doesn’t recognise that simply hasn’t thought about the issue in any great detail. Equally, there are some very bad arguments on both sides. And the tragedy is that the campaign has seemingly focused on the bad arguments rather than the good ones.

The EU is, as I’ve said, badly broken in many respects, and if it continues down some of those broken pathways it has the potential to do a great deal of harm. But it has also been extremely beneficial in very many ways, and the UK has gained a lot from our membership. Again, I will say that anyone who does not recognise the truth of both these statements has too little understanding to make an informed choice.

Questions, questions

Ultimately, everyone’s decision has to be their own. I’m not telling anyone how to vote, or how I intend to vote. But I will pose a set of questions that will inform my own choice. Hopefully, they will be helpful to others as well. Those questions are:

  1. Are the EU’s structural flaws beyond repair, or can they be fixed?
  2. In the long term – not just the next few years, but for the next generation – which option offers the best prospects for our economic security and freedom?
  3. Is a decision to leave influenced by the “grass is always greener” fallacy?
  4. Conversely, does a decision to stay reflect the sunk costs fallacy?
  5. Which decision will I be most proud of explaining to my children, and why?

I’ll leave it to you to answer those questions for yourselves, or to pose others. I may come back after the vote and explain how I answered them.

by Mark at June 15, 2016 10:56 AM

June 05, 2016

Liam Proven

Did the floppy disk, & diskette drives, die before their time?

I almost never saw 2.8MB floppy drives.

I know they were out there. The later IBM PS/2 machines used them, and so did some Unix workstations, but the 2.8MB format -- quad or extended density -- never really took off.

It did seem to me that if the floppy companies & PC makers had actually adopted them wholesale, the floppy disk as a medium might have survived for considerably longer.

The 2.8MB drives never really took off widely, so the media remained expensive, ISTM -- and thus little software was distributed on the format, because few machines could read it.

By 1990 there was an obscure and short-lived 20MB floptical diskette format:

http://www.cbronline.com/news/insites_20mb_floptical_drive_reads_144mb_disks

Then in 1994 came 100MB Zip disks, which for a while were a significant format -- I had Macs with built-in-as-standard Zip drives.

Then the 3½" super floptical drives, the Imation SuperDisk in 1997, 144MB Caleb UHD144 in early 1998 and then 150MB Sony HiFD in late 1998.

(None of these later drives could read 2.8MB diskettes, AFAIK.)

After that, writable CDs got cheap enough to catch on, and USB Flash media mostly has killed them off now.

If the 2.8 had taken off, and maybe even intermediate ~6MB and ~12MB formats -- was that feasible? -- before the 20MB ones, well, with widespread adoption, there wouldn't have been an opening for the Zip drive, and the floppy drive might have remained a significant and important medium for another decade.

I didn't realise that the Zip drive eventually got a 750MB version, presumably competing with Iomega's own 1GB Jaz drive. If floppy drives had got into that territory, could they have even fended off CDs? Recordable CDs always were a pain. They were effectively a one-shot medium and thus inconvenient and expensive -- write on one machine, use a few times at best, then throw away.

I liked floppies. I enjoy playing with my ancient Sinclair computers, but loading from tape cassette is just a step too far. I remember the speed and convenience when I got my first Spectrum disk drive, and I miss it. Instant loading from an SD drive just isn't the same. I don't use them on PCs any more -- I don't have a machine with a floppy drive in this country -- but for 8-bits, two drives with a meg or so of storage was plenty. I used them long after most people, if only for updating BIOSes and so on.

June 05, 2016 01:05 PM

The rise & fall of the first real x86 rival to Intel: the Cyrix 6x86

I was surprised to read someone castigating and condemning the Cyrix line of PC CPUs today.

For a while, I recommended 'em and used 'em myself. My own home PC was a Cyrix 6x86 P166+ for a year or two. Lovely machine -- a 133MHz processor that performed about 30-40% better than an Intel Pentium MMX at the same clock speed.

My then-employer, PC Pro magazine, recommended them too.

I only ever hit one problem: I had to turn down reviewing the latest version of Aldus PageMaker because it wouldn't run on a 6x86. I replaced it with a Baby-AT Slot 1 Gigabyte motherboard and a Pentium II 450. (Only the 100MHz front side bus Pentium IIs were worth bothering with IMHO. The 66MHz FSB PIIs could be outperformed by a cheaper SuperSocket 7 machine with a Cyrix chip.) It was very difficult to find a Baby-AT motherboard for a PII -- the market had switched to ATX by then -- but it allowed me to keep a case I particularly liked, and indeed, most of the components in that case, too.

The one single product that killed the Cyrix chips was id Software's Quake.

Quake used very cleverly optimised x86 code that interleaved FPU and integer instructions, as John Carmack had worked out that apart from instruction loading, which used the same registers, FPU and integer operations used different parts of the Pentium core and could effectively be overlapped. This nearly doubled the speed of FPU-intensive parts of the game's code.

The interleaving didn't work on Cyrix cores. It ran fine, but the operations did not overlap, so execution speed halved.

On every other benchmark and performance test we could devise, the 6x86 core was about 30-40% faster than the Intel Pentium core -- or the Pentium MMX, as nothing much used the extra instructions, so really only the additional L1 cache helped. (The Pentium 1 had 16 kB of L1; the Pentium MMX had 32 kB.)

But Quake was extremely popular, and everyone used it in their performance tests -- and thus hammered the Cyrix chips, even though the Cyrix was faster in ordinary use, in business/work/Windows operation, indeed in every other game except Quake.

And ultimately that killed Cyrix off. Shame, because the company had made some real improvements to the x86-32 design. Improving instructions-per-clock is more important than improving the raw clock speed, which was Intel's focus right up until the demise of the Netburst Pentium 4 line.

AMD with the 64-bit Sledgehammer core (Athlon 64 & Opteron) did the same to the P4 as Cyrix's 6x86 did to the Pentium 1. Indeed I have a vague memory some former Cyrix processor designers were involved.

Intel Israel came back with the (Pentium Pro-based) Pentium M line, intended for notebooks, and that led to the Core series, with IPC speeds that ultimately beat even AMD's. Today, nobody can touch Intel's high-end x86 CPUs. AMD is looking increasingly doomed, at least in that space. Sadly, though, Intel has surrendered the low end and is killing the Atom line.

http://www.pcworld.com/article/3063672/windows/the-death-of-intels-atom-casts-a-dark-shadow-over-the-rumored-surface-phone.html

The Atoms were always a bit gutless, but they were cheap, ran cool, and were frugal with power. In recent years they've enabled some interesting cheap low-end Windows 8 and Windows 10 tablets:

http://www.anandtech.com/show/8760/hp-stream-7-review

https://www.amazon.co.uk/Windows10-Tablet-Display-11000mAh-Battery-F-Black-B-Gray/dp/B01DF3UV3Y?ie=UTF8&keywords=hi12&qid=1460578088&ref_=sr_1_2&sr=8-2

Given that there is Android for x86, and have already been Intel-powered Android phones, plus Windows 10 for phones today, this opened up the intriguing possibility of x86 Windows smartphones -- but then Intel slammed the door shut.

Cyrix still exists, but only as a brand for Via, with some very low-end x86 chips. Interestingly, these don't use Cyrix CPU cores -- they use a design taken from a different non-Intel x86 vendor, the IDT WinChip:

https://en.wikipedia.org/wiki/WinChip

I installed a few WinChips as upgrades for low-speed Pentium PCs. The WinChip never was all that fast, but it was a very simple, stripped-down core, so it ran cool, was about as quick as a real Pentium core, but was cheaper and ran at higher clock speeds, so they were mainly sold as an aftermarket upgrade for tired old PCs. The Cyrix chips weren't a good fit for this, as they required different clock speeds, BIOS support, additional cooling and so on. IDT spotted a niche and exploited it, and oddly, that is the non-Intel x86 core that's survived at the low-end, and not the superior 6x86 one.

In the unlikely event that Via does some R&D work, it could potentially move into the space now vacated by the very low-power Atom chips. AMD is already strong in the low-end x86 desktop/notebook space with its Fusion processors which combine a 64-bit x86 core with an ATI-derived GPU, but they are too big, too hot-running and too power-hungry for smartphones or tablets.

June 05, 2016 12:34 PM

June 02, 2016

Steve Kennedy

Garmin Dashcam 35

The Garmin Dashcam 35 is a little camera that can be mounted on your car dashboard, and as soon as power is applied it will start recording. Its size is 9.43 cm x 4.85 cm x 3.89 cm, it has a 3" TFT LCD display, and it weighs about 113g.

The supplied mount needs to be pushed into the socket on the front of the camera (it takes a fair bit of effort to snap it in) and the mount then attaches to the windscreen with a sticky pad. Unfortunately that means the mount is pretty well permanently attached to the windscreen, as the glue is pretty strong; it's a shame there aren't other types of mount that, say, attach to a heater vent or some other part of the dashboard and can easily be removed. A bodged mount can work by attaching some sticky tape to the mount and the bottom of the camera, and it will just about sit on the dashboard with an unobstructed view ahead.

The system records HD video (1080p or 720p), stamps the video with GPS coordinates, and records in a continuous loop, i.e. if it runs out of space it will overwrite older video. It should come with a 4GB microSD card (though the unit supplied didn't have one), which is enough for about an hour of video. It supports up to 64GB cards.

It also has a microphone which records what's happening in the car! There's also an accelerometer which will detect a collision, logging it as an "event" (it will start recording if it isn't already, and log the GPS coordinates). However, an "event" can also be triggered while recording by pushing the mount into the camera or by the unit falling off the dashboard onto the car floor.

There's no software in the box, though Garmin's Dash Cam Player is available for download (for Mac and Windows) through their site. This will show the video and the GPS route next to it (when run it will look for an attached camera or videos on the SD card and import them on to the PC/Mac). The actual video files are MP4 so can be viewed in pretty much any video player. The player shows the route taken, speed and time (there is a pointer on the route that moves as you play the video).

A rather nice feature is that it's possible to select Bing (the default), Baidu or OpenStreetMap for the map display. It's also possible to convert any unsaved videos to saved (on the PC/Mac), export GPS positions to a GPX file and take a screen shot.

It's also possible to buy a Cyclops subscription from the Garmin store (for various countries including UK/Europe) which will alert the user to speed cameras etc.

Apart from the niggling permanent windscreen mount, it's a nice little unit. It retails for £159 from Garmin, but can be had on-line for at least £20 cheaper.

by Steve Karmeinsky (noreply@blogger.com) at June 02, 2016 05:06 PM

May 24, 2016

Alex Bloor

When Junk Callers get fiesty… (and don’t know the law)

(Update at end 16:32 and 23:08) I had a message passed to me, to call someone back; a Michael Kennedy from “PSI Group”. The name of the company they said they were from seemed familiar, so I did call them … Continue reading

by Alex Bloor at May 24, 2016 01:53 PM

May 20, 2016

Alex Bloor

Part 5: Postmortem of a Kickstarter campaign; Camsformer

Part Five – The best stuff always happens after I leave the party 🙁 Part Four ended with me being given a refund and effectively ending my relationship with the project. Despite no longer having a chance of getting the … Continue reading

by Alex Bloor at May 20, 2016 05:12 PM

May 19, 2016

Alex Bloor

Part 4: Postmortem of a Kickstarter campaign; Camsformer

Part Four…The purchase of my silence (spoiler alert) We left the story at part three on September 6th 2015, at which point I’d lost my shit for the first time, pointed out a whole lot of oddities and concerns and … Continue reading

by Alex Bloor at May 19, 2016 08:00 PM

May 18, 2016

Jonathan McDowell

First steps with the ATtiny45

1 port USB Relay

These days the phrase “embedded” usually means no console (except, if you’re lucky, console on a UART for debugging) and probably busybox for as much of userspace as you can get away with. You possibly have package management from OpenEmbedded or similar, though it might just be a horrible kludged together rootfs if someone hates you. Either way it’s rare for it not to involve some sort of hardware and OS much more advanced than the 8 bit machines I started out programming on.

That is, unless you’re playing with Arduinos or other similar hardware. I’m currently waiting on some ESP8266 dev boards to arrive, but even they’re quite advanced, with wifi and a basic OS framework provided. A long time ago I meant to get around to playing with PICs but never managed to do so. What I realised recently was that I have a ready made USB relay board that is powered by an ATtiny45. First step was to figure out if there were suitable programming pins available, which turned out to be all brought out conveniently to the edge of the board. Next I got out my trusty Bus Pirate, installed avrdude and lo and behold:

$ avrdude -p attiny45 -c buspirate -P /dev/ttyUSB0
Attempting to initiate BusPirate binary mode...
avrdude: Paged flash write enabled.
avrdude: AVR device initialized and ready to accept instructions

Reading | ################################################## | 100% 0.01s

avrdude: Device signature = 0x1e9206 (probably t45)

avrdude: safemode: Fuses OK (E:FF, H:DD, L:E1)

avrdude done.  Thank you.

Perfect. I then read the existing flash image off the device, disassembled it, worked out it was based on V-USB, and found that the only interesting extra bit was that the relay was hanging off pin 3 on IO port B. That led to me knocking up what I thought should be a functionally equivalent version of the firmware, available locally or on GitHub. It’s worked with my basic testing so far and has confirmed to me I understand how the board is set up, meaning I can start to think about what else I could do with it…
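
Writing a rebuilt image back uses the same Bus Pirate arrangement. A minimal sketch, assuming the build produces an Intel hex file (the filename here is just a placeholder):

# Flash the rebuilt firmware to the ATtiny45 via the Bus Pirate
# (the trailing :i tells avrdude the file is Intel hex)
avrdude -p attiny45 -c buspirate -P /dev/ttyUSB0 -U flash:w:relay-firmware.hex:i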

May 18, 2016 09:25 PM

May 16, 2016

Jess Rowbottom

Love Love Peace Peace: Adventures In Stockholm

Those of you who are regular readers will know of my love of all things Eurovision: the song contest which, despite being about us all getting on and loving each other, is quite the global battleground. I threw Eurovision parties alongside my ex for around ten years, then in more recent times hosted small intimate get-togethers while friends watched the show. Last year I said I wouldn’t host another one, so throwing caution to the wind I booked myself a ticket to Stockholm for the Eurovision Song Contest 2016.

Starting on Friday…

Landing at Arlanda airport I hopped a reassuringly uneventful shuttle train to my hotel at Hammarby Sjöstad, stereotypically furnished like an Ikea showroom. Half an hour of freshening up, and off exploring.

First destination: The Eurovision Village, a small arena with two stages in the centre of Stockholm near the Royal Palace. Sponsors plied their wares with Eurovision gimmicks: you could karaoke on a little stage, dress up as an act, have yourself recorded or photographed and publicised. On the stages previous Eurovision acts sang their songs and had a stab at current entries with everyone joining in.

Absolutely everything was Eurovision with banners all over the place. Round the corner sat a huge countdown clock with a queue of camera-wielding public waiting for their selfies. Printed signs in shop windows saying “Eurovision Special! Handbags, 349 SEK!”. Pedestrian crossings played Loreen’s Euphoria while you waited and Måns Zelmerlow’s Heroes when it was time to walk. Rainbow postboxes. Anything which could be linked to Eurovision was – but bizarrely very little actual merchandise save for a small stand with a huge queue in the Village itself. On one side the ABBA Museum had a booth and given I had nothing to do that day other than explore, I booked myself a ticket and hopped a tram.

Eurovision overload in the Museum

Pop House is a little museum complex housing a couple of exhibitions alongside the ABBA Museum itself, in particular a temporary Eurovision Song Contest Museum. With exhibits such as Conchita Wurst’s Rise Like A Phoenix dress from the Eurovision Final 2014 and ABBA’s award medal from Brighton in 1974 there were certainly lots of things to look at. Through into the ABBA Museum itself it seemed to have it all – from the tour vans which Björn and Benny started their careers with, to a telephone which if it rang was apparently a member of ABBA themselves phoning the museum for a chat, exact reproductions of their studios and management office, vocal booths, and a stage where you could perform as the fifth member of ABBA. Gold discs abounded alongside artefacts from the big tours. Finally, there was an exhibition of Swedish pop history with “stuff” from Roxette, Ace Of Base, Swedish House Mafia and all those sorts of artists. Very enjoyable.

I figured I really should eat something and as I’d been awake since 4am travelling, going back to the hotel restaurant seemed a good idea. Reindeer for tea – a very lean meat, wonderfully presented, with a glass of Shiraz for company. I was about to head to bed when I got a message from my friends James and Simon – did I fancy going to a party? Oh goodness. I booked a ticket, freshened up, and got a cab out to one of the hotels.

The rest of the evening involved dancing, a rather splendidly-presented rhubarb cocktail with a price which would put the GDP of a small country to shame, more dancing, ridiculously expensive G&Ts which were 98% gin with the tonic waved from a distance, watching a couple of past Eurovision acts play live, and a nice chat with Charlotte Perrelli who sang Hero for Sweden in 2008. And that was just Friday!

Saturday’s Daytime Amusement

Saturday brought an opportunity for exploration, so off up to Gamla Stan (Stockholm’s Old Town) and shopping. It’s quite a nice area with lots of little shops to poke around in: plenty of tourist-trap places selling Swedish-themed tat but a few rubies in the dust, such as a nice little jewellery shop with a shopkeeper who was only too happy to discuss who might win. She’d been to the jury final on Friday night and I had to repeatedly ask that she not spoil it for me, so I figured I was in for a treat later.

A happy discovery: Sweden has more tall women, which means the shops are better stocked for us lanky birds and at a reasonable cost too, and everyone’s so friendly – even the lady in H&M taught me some Swedish when I apologetically mumbled I didn’t understand what she’d asked of me. I started with “hej” (hello) and “tack” (thanks), and the basic number system. On the topic of language the pronunciation of “j” as “i” confused me when I got in an Über cab where they already know your name, and would say “yes?” and I’d reply “er, yes!”, followed by “no, no – your name is yes?”… “ohhh, Jess, it’s pronounced with a J, I do apologise.” (Apologies by default, how British…)

Lunch was meatballs. Well, you have to, don’t you… none of this Ikea stuff here! More poking around in shops, and back to the hotel to get ready.

The Grand Final

Off to Eurovision, brb!

I wasn’t intending on overdoing the patriotic outfit and in the end I think I scrubbed up pretty well: a red frock, Union flag scarf, flag earrings and red heels – possibly even subconsciously channelling Scooch. I was feeling a little self-conscious as I got the tram but any fears evaporated away once I saw other Brits. Despite being on my own for a substantial part of my visit, I always could find someone to chat Eurovision with!

From the moment I arrived at Globen it was insane. People dressed up, dancing, singing. I joined a small group of Brits outside being interviewed for a French TV station, singing Joe and Jake’s entry at the top of our voices (apparently it’s been on telly though I’ve yet to see it). Unfeasible movement-limiting outfits from years gone by. A guy in a flesh-coloured skin suit (so he looked nude) with a wolf protecting his modesty. Serbian human disco balls. Spanish milk-maids. Companies giving out leaflets and free samples everywhere. Promoters touting unofficial afterparties. Journalists. Cameras. All utterly bonkers.

My ticket had me in the Tele2 Arena next door to the final itself, for the “Eurovision Party”. Security was a quick pat-down and in I went. I found another group of Brits (look for the flag – these lovely folks from Liverpool had bunting!) and hooray, we had beer too! In our Arena a few acts were performing before the Contest itself began, most notable of which was 2012 winner Loreen singing Euphoria live: this was my coming-out song a few years ago and it might seem soppy but the music still takes me back to that emotional rollercoaster.

We had a short informational note telling us when the broadcast would come live from our bit, and then the big screen went live to the stage next door. Not just the stage though, we had little screens showing us the Eurovision control room, the backstage area (fascinating watching  the acts prepare and all the movement that goes on around staging and scenery) and the artists’ green room. A countdown to going live across the world… and… Good Evening Europe!

That’s Måns Zelmerlow up there on stage…

I’m not ashamed to say I danced. Belgium opening the Contest was excellent as it got us “in the mood”, that Little-Boots-Meets-Uptown-Funk thing. I’ve never danced while the competition was actually airing and it was so liberating, especially with others dancing around and the occasional witty remark between countrymen. As Poland took to the stage I decided I needed another beer – bad timing, I heard an announcement we were going live imminently!

Simon and I quickly elbowed our way forward, and suddenly we were live with Måns Zelmerlow and two previous entrants! Unashamedly, I waved – and if you watch in HD you can see me and my unfeasibly long arm. So there, I was being a numpty on the official Eurovision coverage – that’s one off the bucket-list.

The performances went on and by the time we reached the UK’s entry – 25th down the line – I’d been joined by Jon who runs The Thoroughly Good Blog. We stood, nervous, tense, draped in my Union Flag scarf, and watched Joe and Jake storm it. The performance was flawless, the song a decent entry. We raised a glass.

I really very much enjoyed Love Love Peace Peace, the interval musical number performed by Petra Mede and Måns Zelmerlow alongside Eurovision stars such as Alexander Rybak and Lordi. I do hope they release it as a single, I’d certainly buy it but given the only copies of the 2013 song Swedish Smorgasbord are bootlegs I’m not holding out much hope.

Standing near the front I was grabbed by two women who said “You have to come this way! Come over here! We need a photo!” – it turns out I was as tall as their friend Agneta, and this was new. I posed with her for a photo and we danced for a while, chatting away and realising we were both the same sizes. Nice woman, maybe we’ll meet again in a couple of years and dance again!

Scoring

Me and Agneta, tall girls together.

I’m not a fan of the new scoring system, which presents the jury votes and the televotes separately, with the net effect of jumbling the scoreboard again halfway through proceedings (still, I bet it made the drinking game in Wrenthorpe somewhat more hectic).

We were crossing our fingers it wouldn’t be Russia and by the end of the jury voting it did indeed seem we had a winner in Australia; apparently over in the Press Centre someone was already booking hotels and flights for Berlin, strongly rumoured to be the venue.

The televotes came in and… well, Ukraine got it, didn’t they. I don’t think we were all massively surprised, but certainly not elated as we should have been. I’d been talking to other fans and we were all glumly resigned to Russia winning although nobody seemed to want it, so surely Ukraine’s good, right? Maybe not. Ukraine’s record of LGBT rights abuses and violence is well documented, and I’m not sure I’d want to go somewhere my girlfriend and I don’t feel safe.

Put simply, UK entry Joe & Jake were shortchanged: we had a very good song, and I don’t think it could have been executed better. Maybe the problem is all in the promotion now, but when it comes down to it, for all that we laughed along with Terry Wogan, the casual xenophobia did a lot of damage and will take a long time to blow over.

In the aftermath, I ventured to the arena lounge and stopped with new friends until being chucked out at 4am – slightly cheaper gin, I only had to hawk the one kidney. As the sun came up I hopped a cab back to the hotel and collapsed into bed: ouchie sore feet from wearing heels for 8 hours, a slight smell of stale beer spilled onto me by other partygoers, and a total buzz from the night.

Aftermath

Home with a mug of tea, the Official Programme and lots of lovely memories.

Sunday brought more wandering around Stockholm, a trip back to Gamla Stan, and through the remains of the Eurovision Village; it was nice to see someone else clearing up after a party for a change although it amplified my comedown somewhat. Making it back to the hotel to collect my bag and thus to Arlanda, I saw various Brits on the way and we chatted about the result, what it meant and where we might have gone wrong. Everyone had an opinion, but none of them pointed to our entry itself. Still, there’s always next year isn’t there.

(A little surreal to see the Israeli entry at the airport queueing for the security check next to me, as well.)

I’m saddened it came down to a protest vote – at least on the surface of it. This isn’t what Eurovision is about, and for all the bitching folks do about the politics of voting for neighbours it’s not affected it that much until this year. Russia are full of sour grapes, some press are talking about their country being the laughing-stock of Europe, and most have an analysis of the impact on 2017’s event (Guardian link here).

Would I go to the Eurovision Grand Final again? Absolutely. It’s like an addiction, and I will definitely be up for it especially when it’s in Europe itself. I’ve met some lovely people from wandering around the town, in the Arena, even on the flight home. However I’m not sure that stretches to the risk of personal safety and being beaten up (or worse) in Ukraine – plus it’s a bit far to go for a weekend…

…I might go back to submitting a song though. Hmm.

 

Header photo credit: Jason Foley-Doherty.

by Jess at May 16, 2016 03:52 PM

May 13, 2016

Liam Proven

The decline & fall of DEC - & MS

(Repurposed email reply)

Although I was educated & worked with DEC systems, I didn't have much to do with the company itself. Its support was good, the kit ludicrously expensive, and the software offerings expensive, slow and lacking competitive features. However, they also scored in some ways.

My 60,000' view:

Microsoft knew EXACTLY what it was doing with its practices when it built up its monopoly. It got lucky with the technology: its planned future super products flopped, but it turned on a dime & used what worked.

But killing its rivals, any potential rival? Entirely intentional.

The thing is that no other company was poised to effectively counter the MS strategy. Nobody.

MS' almost-entirely-software-only model was almost unique. Its ecosystem of apps and 3rd party support was unique.

In the end, it actually did us good. Gates wanted a computer on every desk. We got that.

The company's strategy called for open compatible generic hardware. We got that.

Only one platform, one OS, was big enough, diverse enough, to compete: Unix.

But commercial, closed, proprietary Unix couldn't. 2 ingredients were needed:

#1 COTS hardware - which MS fostered;
#2 FOSS software.

Your point about companies sharing their source is noble, but I think inadequate. The only thing that could compete with a monolithic software monopolist on open hardware was open software.

MS created the conditions for its own doom.

Apple cleverly leveraged FOSS Unix and COTS X86 hardware to take the Mac brand and platform forward.

Nobody else did, and they all died as a result.

If Commodore, Atari and Acorn had adopted similar strategies (as happened independently of them later, after their death, resulting in AROS, AFROS & RISC OS Open), they might have lived.

I can't see it fitting the DEC model, but I don't know enough. Yes, cheap low-end PDP-11s with FOSS OSes might have kept them going longer, but not saved them.

The deal with Compaq was catastrophic. Compaq was in Microsoft's pocket. I suspect that Intel leant on Microsoft and Microsoft then leant on Compaq to axe Alpha, and Compaq obliged. It also knifed HP OpenMail, possibly the Unix world's only viable rival to Microsoft Exchange.

After that it was all over bar the shouting.

Microsoft could not have made a success of OS/2 3 without Dave Cutler... But DEC couldn't have made a success out of PRISM either, I suspect. Maybe a stronger DEC would have meant Windows NT would never have happened.

May 13, 2016 11:51 AM

May 06, 2016

Andy Smith (strugglers.net)

Using a TOTP app for multi-factor SSH auth

I’ve been playing around with enabling multi-factor authentication (MFA) on web services and went with TOTP. It’s pretty simple to implement in Perl, and there are plenty of apps for it including Google Authenticator, 1Password and others.

I also wanted to use the same multi-factor auth for SSH logins. Happily, from Debian jessie onwards libpam-google-authenticator is packaged. To enable it for SSH you would just add the following:

auth required pam_google_authenticator.so

to /etc/pam.d/sshd (put it just after @include common-auth).

and ensure that:

ChallengeResponseAuthentication yes

is in /etc/ssh/sshd_config.

Not all my users will have MFA enabled though, so to skip prompting for these I use:

auth required pam_google_authenticator.so nullok

Finally, I only wanted users in a particular Unix group to be prompted for an MFA token so (assuming that group was totp) that would be:

auth [success=1 default=ignore] pam_succeed_if.so quiet user notingroup totp
auth required pam_google_authenticator.so nullok

If the pam_succeed_if conditions are met then the next line is skipped, so that causes pam_google_authenticator to be skipped for users not in the group totp.

Each user will require a TOTP secret key generating and storing. If you’re only setting this up for SSH then you can use the google-authenticator binary from the libpam-google-authenticator package. This asks you some simple questions and then populates the file $HOME/.google_authenticator with the key and some configuration options. That looks like:

T6Z2KSDCG7CEWPD6EPA6BICBFD4KYKCSGO2JEQVII7ZJNCXECRZPJ4GJHD3CWC43FZIKQUSV5LR2LFFP
" RATE_LIMIT 3 30 1462548404
" DISALLOW_REUSE 48751610
" TOTP_AUTH
11494760
25488108
33980423
43620625
84061586

The first line is the secret key; the five numbers are emergency codes that will always work (once each) if you’re locked out.

If generating keys elsewhere then you can just populate this file yourself. If the file isn’t present then that’s when “nullok” applies; without “nullok” authentication would fail.
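
If you’re enabling this for a batch of users, the google-authenticator binary can also be driven non-interactively. A rough sketch only; the flag names below are from memory, so check google-authenticator --help before relying on them:

# Time-based codes, disallow code reuse, rate-limit to 3 attempts per 30s,
# allow a window of 3 codes, and write ~/.google_authenticator without prompting
google-authenticator -t -d -f -r 3 -R 30 -w 3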

Note that despite the repeated mentions of “google” here, this is not a Google-specific service and no data is sent to Google. Google are the authors of the open source Google Authenticator mobile app and the libpam-google-authenticator PAM module, but (as evidenced by the Perl example) this is an open standard and client and server sides can be implemented in any language.
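
As a quick illustration of that openness, any generic TOTP client can generate valid codes from the same secret. For example, with oathtool (from the oath-toolkit project, which you’d need to install separately; it isn’t required by the PAM module) and the base32 secret from the first line of the file above:

# Print the current six-digit TOTP code for a base32 secret
oathtool --totp -b T6Z2KSDCG7CEWPD6EPA6BICBFD4KYKCSGO2JEQVII7ZJNCXECRZPJ4GJHD3CWC43FZIKQUSV5LR2LFFP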

So that is how you can make a web service and an SSH service use the same TOTP multi-factor authentication.

by Andy at May 06, 2016 04:34 PM

April 28, 2016

Steve Kennedy

Misfit Ray, it might actually be the first wearable that actually looks like jewellery

A while back, Misfit released the Ray. Basically a tube with a single LED and straps coming out either side (initially only silicone straps, which are a bit ugly, but now leather straps are available, which look much nicer - though obviously not made for sports/water). The tube is made from aluminium and comes in Carbon Black or Rose Gold.

Apart from having just the one LED (rather than the Shine2's ring of them), the Ray has pretty much the same functionality as the Shine2: it measures steps, activities and sport, and works with Misfit Link to trigger actions (and can link to IFTTT to trigger pretty much anything).

Progress is tracked by the LED flashing different colours (under 25%, 25%+, 50%+, 75%+ and 100%+, i.e. goal met) and it flashes blue when syncing with the Misfit app over Bluetooth (it supports Bluetooth 4.1). It will also indicate incoming calls, incoming texts and the wake-up alarm.

The Shine2 uses a single CR2032 battery while the Ray now uses 3 x 393 button cells (which should also give 6 months usage).

The Ray is also 50m water resistant so can be used for swimming.

Misfit are promising a range of new straps and other accessories so it can be worn, say, as a pendant.

The sport band version retails for £72.87 and the leather for £87.45, not the cheapest units out there, but probably (at least for now) the prettiest.

by Steve Karmeinsky (noreply@blogger.com) at April 28, 2016 06:57 PM

April 27, 2016

Liam Proven

Where did we all go wrong? And why doesn't anyone remember? [Tech blog post]

My contention is that a large part of the reason that we have the crappy computers that we do today -- lowest-common-denominator boxes, mostly powered by one of the kludgiest and most inelegant CPU architectures of the last 40 years -- is not technical, nor even primarily commercial or due to business pressures, but rather, it's cultural.

When I was playing with home micros (mainly Sinclair and Amstrad; the American stuff was just too expensive for Brits in the early-to-mid 1980s), the culture was that Real Men programmed in assembler and the main battle was Z80 versus 6502, with a few weirdos saying that 6809 was better than either. BASIC was the language for beginners, and a few weirdos maintained that Forth was better.

At university, I used a VAXcluster and learned to program in Fortran-77. The labs had Acorn BBC Micros in -- solid machines, the best 8-bit BASIC ever, and they could interface both with lab equipment over IEEE-488 and with generic printers and so on over Centronics parallel and its RS-423 interface [EDIT: fixed!], which could talk to RS-232 kit.

As I discovered when I moved into the professional field a few years later (1988), this wasn't that different from the pro stuff. A lot of apps were written in various BASICs, and in the old era of proprietary OSes on proprietary kit, for performance, you used assembler.

But a new wave was coming. MS-DOS was already huge and the Mac was growing strongly. Windows was on v2 and was a toy, but Unix was coming to mainstream kit, or at least affordable kit. You could run Unix on PCs (e.g. SCO Xenix), on Macs (A/UX), and my employers had a demo IBM RT-6150 running AIX 1.

Unix wasn't only the domain (pun intentional) of expensive kit priced in the tens of thousands.

A new belief started to spread: that if you used C, you could get near-assembler performance without the pain, and the code could be ported between machines. DOS and Mac apps started to be written (or rewritten) in C, and some were even ported to Xenix. In my world, nobody used stuff like A/UX or AIX, and Xenix was specialised. I was aware of Coherent as the only "affordable" Unix, but I never saw a copy or saw it running.

So this second culture of C code running on non-Unix OSes appeared. Then the OSes started to scramble to catch up with Unix -- first OS/2, then Windows 3, then, for a decade, the parallel universe of Windows NT, until XP became established and Win9x finally died. Meanwhile, Apple and IBM flailed around, until IBM surrendered and Apple merged with NeXT and switched to NeXTstep.

Now, Windows is evolving to be more and more Unix-like, with GUI-less versions, clean(ish) separation between GUI and console apps, a new rich programmable shell, and so on.

While the Mac is now a Unix box, albeit a weird one.

Commercial Unix continues to wither away. OpenVMS might make a modest comeback. IBM mainframes seem to be thriving; every other kind of big iron is now emulated on x86 kit, as far as I can tell. IBM has successfully killed off several efforts to do this for z Series.

So now, it's Unix except for the single remaining mainstream proprietary system: Windows. Unix today means Linux, while the weirdoes use FreeBSD. Everything else seems to be more or less a rounding error.

C always was like carrying water in a sieve, so now, we have multiple C derivatives, trying to patch the holes. C++ has grown up but it's like Ada now: so huge that nobody understands it all, but actually, a fairly usable tool.

There's the kinda-sorta FOSS "safe C++ in a VM", Java. The proprietary kinda-sorta "safe C++ in a VM", C#. There's the not-remotely-safe kinda-sorta C in a web browser, Javascript.

And dozens of others, of course.

Even the safer ones run on a basis of C -- so the lovely cuddly friendly Python, that everyone loves, has weird C printing semantics to mess up the heads of beginners.

Perl has abandoned its base, planned to move onto a VM, then the VM went wrong, and now has a new VM and to general amazement and lack of interest, Perl 6 is finally here.

All the others are still implemented in C, mostly on a Unix base, like Ruby, or on a JVM base, like Clojure and Scala.

So they still have C like holes and there are frequent patches and updates to try to make them able to retain some water for a short time, while the "cyber criminals" make hundreds of millions.

Anything else is "uncommercial" or "not viable for real world use".

Borland totally dropped the ball and lost a nice little earner in Delphi, but it continues as Free Pascal and so on.

Apple goes its own way, but has forgotten the truly innovative projects it had pre-NeXT, such as Dylan.

There were real projects that were actually used for real work, like Oberon the OS, written in Oberon the language. Real pioneering work in UIs, such as Jef Raskin's machines, the original Mac and Canon Cat -- forgotten. People rhapsodise over the Amiga and forget that the planned OS, CAOS, to be as radical as the hardware, never made it out of the lab. Same, on a smaller scale, with the Acorn Archimedes.

Despite that, of course, Lisp never went away. People still use it, but they keep their heads down and get on with it.

Much the same applies to Smalltalk. Still there, still in use, still making real money and doing real work, but forgotten all the same.

The Lisp Machines and Smalltalk boxes lost the workstation war. Unix won, and as history is written by the victors, now the alternatives are forgotten or dismissed as weird kooky toys of no serious merit.

The senior Apple people didn't understand the essence of what they saw at PARC: they only saw the chrome. They copied the chrome, not the essence, and now all that any of us have is the chrome. We have GUIs, but on top of the nasty kludgy hacks of C and the like. A late-'60s skunkware project now runs the world, and the real serious research efforts to make something better, both before and after, are forgotten historical footnotes.

Modern computers are a vast disappointment to me. We have no thinking machines. The Fifth Generation, Lisp, all that -- gone.

What did we get instead?

Like dinosaurs, the expensive high-end machines of the '70s and '80s didn't evolve into their successors. They were just replaced. First little cheapo 8-bits, not real or serious at all, although they were cheap and people did serious stuff with them because it's all they could afford. The early 8-bits ran semi-serious OSes such as CP/M, but when their descendants sold a thousand times more, those descendants weren't running descendants of that OS -- no, it and its creator died.

CP/M evolved into a multiuser multitasking 386 OS that could run multiple MS-DOS apps on terminals, but it died.

No, then the cheapo 8-bits thrived in the form of an 8/16-bit hybrid, the 8086 and 8088, and a cheapo knock-off of CP/M.

This got a redesign into something grown-up: OS/2.

Predictably, that died.

So the hacked-together GUI for DOS got re-invigorated with an injection of OS/2 code, as Windows 3. That took over the world.

The rivals - the Amiga, ST, etc? 680x0 chips, lots of flat memory, whizzy graphics and sound? All dead.

Then Windows got re-invented with some OS/2 3 ideas and code, and some from VMS, and we got Windows NT.

But the marketing men got to it and ruined its security and elegance, to produce the lipstick-and-high-heels Windows XP. That version, insecure and flakey with its terrible bodged-in browser, that, of course, was the one that sold.

Linux got nowhere until it copied the XP model. The days of small programs, everything's a text file, etc. -- all forgotten. Nope, lumbering GUI apps, CORBA and RPC and other weird plumbing, huge complex systems, but it looks and works kinda like Windows and a Mac now so it looks like them and people use it.

Android looks kinda like iOS and people use it in their billions. Newton? Forgotten. No, people have Unix in their pocket, only it's a bloated successor of Unix.

The efforts to fix and improve Unix -- Plan 9, Inferno -- forgotten. A proprietary microkernel Unix-like OS for phones -- Blackberry 10, based on QNX -- not Androidy enough, and bombed.

We have less and less choice, made from worse parts on worse foundations -- but it's colourful and shiny and the world loves it.

That makes me despair.

We have poor-quality tools, built on poorly-designed OSes, running on poorly-designed chips. Occasionally, fragments of older better ways, such as functional-programming tools, or Lisp-based development environments, are layered on top of them, but while they're useful in their way, they can't fix the real problems underneath.

Occasionally someone comes along and points this out and shows a better way -- such as Curtis Yarvin's Urbit. Lisp Machines re-imagined for the 21st century, based on top of modern machines. But nobody gets it, and its programmer has some unpleasant and unpalatable ideas, so it's doomed.

And the kids who grew up after C won the battle deride the former glories, the near-forgotten brilliance that we have lost.

And it almost makes me want to cry sometimes.

We should have brilliant machines now, not merely Steve Jobs' "bicycles for the mind", but Gossamer Albatross-style hang-gliders for the mind.

But we don't. We have glorified 8-bits. They multitask semi-reliably, they can handle sound and video and 3D and look pretty. On them, layered over all the rubbish and clutter and bodges and hacks, inspired kids are slowly brute-forcing machines that understand speech, which can see and walk and drive.

But it could have been so much better.

Charles Babbage didn't finish the Difference Engine. It would have paid for him to build his Analytical Engine, and that would have given the Victorian British Empire the steam-driven computer, which would have transformed history.

But he got distracted and didn't deliver.

We started to build what a few old-timers remember as brilliant machines, machines that helped their users to think and to code, with brilliant -- if flawed -- software written in the most sophisticated computer languages yet devised, by the popular acclaim of the people who really know this stuff: Lisp and Smalltalk.

But we didn't pursue them. We replaced them with something cheaper -- with Unix machines, an OS only a nerd could love. And then we replaced the Unix machines with something cheaper still -- the IBM PC, a machine so poor that the £125 ZX Spectrum had better graphics and sound.

And now, we all use descendants of that. Generally acknowledged as one of the poorest, most-compromised machines, based on descendants of one of the poorest, most-compromised CPUs.

Yes, over the 40 years since then, most of the rough edges have been polished out. The machines are now small, fast and power-frugal, with tons of memory and storage and with great graphics and sound. But it's taken decades to get here.

And the OSes have developed. Now they're feature-rich, fairly friendly, really very robust considering the stone-age stuff they're built from.

But if we hadn't spent 3 or 4 decades making a pig's ear into silk purse -- if we'd started with a silk purse instead -- where might we have got to by now?

April 27, 2016 05:06 PM

April 26, 2016

Jonathan McDowell

Notes on Kodi + IR remotes

This post is largely to remind myself of the details next time I hit something similar; I found bits of relevant information all over the place, but not in one single location.

I love Kodi. These days the Debian packages give me a nice out of the box experience that is easy to use. The problem comes in dealing with remote controls and making best use of the available buttons. In particular I want to upgrade the VDR setup my parents have to a more modern machine that’s capable of running Kodi. In this instance an AMD E350 nettop, which isn’t recent but does have sufficient hardware acceleration of video decoding to do the job. Plus it has a built in fintek CIR setup.

First step was finding a decent remote. The fintek is a proper IR receiver supported by the in-kernel decoding options, so I had a lot of flexibility. As it happened I ended up with a surplus to requirements Virgin V Box HD remote (URC174000-04R01). This has the advantage of looking exactly like a STB remote, because it is one.

Pointed it at the box, saw that the fintek_cir module was already installed and fired up irrecord. Failed to get it to actually record properly. Googled lots. Found ir-keytable. Fired up ir-keytable -t and managed to get sensible output with the RC-5 decoder. Used irrecord -l to get a list of valid button names and proceeded to construct a vboxhd file which I dropped in /etc/rc_keymaps/. I then added a

fintek-cir * vboxhd

line to /etc/rc_maps.cfg to force my new keymap to be loaded on boot.
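
For reference, the keymap file itself is plain text: a comment naming the table and protocol, then scancode/keycode pairs. The scancodes and key names below are invented for illustration; the real ones come from watching ir-keytable -t output while pressing buttons:

# table vboxhd, type: RC-5
0x1e0c KEY_POWER
0x1e20 KEY_CHANNELUP
0x1e21 KEY_CHANNELDOWN
0x1e10 KEY_UP
0x1e11 KEY_DOWN

Something like ir-keytable -c -w /etc/rc_keymaps/vboxhd should load it immediately for testing rather than waiting for a reboot (again from memory; check ir-keytable --help for the exact clear/write options).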

That got my remote working, but then came the issue of dealing with the fact that some keys worked fine in Kodi and others didn’t. This seems to be an issue with scancodes above 0xff. I could have remapped the remote not to use any of these, but instead I went down the inputlirc approach (which is already in use on the existing VDR box).

For this I needed a stable device file to point it at; the /dev/input/eventN file wasn’t stable and as a platform device it didn’t end up with a useful entry in /dev/input/by-id. A ‘quick’

udevadm info -a -p $(udevadm info -q path -n /dev/input/eventN)

provided me with the PNP id (FIT0002) allowing me to create /etc/udev/rules.d/70-remote-control.rules containing

KERNEL=="event*",ATTRS{id}=="FIT0002",SYMLINK="input/remote"

Bingo, a /dev/input/remote symlink. /etc/default/inputlirc ended up containing:

EVENTS="/dev/input/remote"
OPTIONS="-g -m 0"

The options tell it to grab the device for its own exclusive use, and to take all scancodes rather than letting the keyboard ones through to the normal keyboard layer. I didn’t want anything other than things specifically configured to use the remote to get the key presses.

At this point Kodi refused to actually do anything with the key presses. Looking at ~kodi/.kodi/temp/kodi.log I could see them getting seen, but not understood. Further searching led me to construct an Lircmap.xml - in particular the piece I needed was the <remote device="/dev/input/remote"> bit. The existing /usr/share/kodi/system/Lircmap.xml provided a good starting point for what I wanted and I dropped my generated file in ~kodi/.kodi/userdata/.
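
For anyone reproducing this, Lircmap.xml is a fairly small mapping of Kodi’s button names to the key names inputlirc emits. A minimal sketch of the shape only, with an illustrative handful of buttons; the full list of valid names is in the system Lircmap.xml:

<lircmap>
  <remote device="/dev/input/remote">
    <left>KEY_LEFT</left>
    <right>KEY_RIGHT</right>
    <up>KEY_UP</up>
    <down>KEY_DOWN</down>
    <select>KEY_OK</select>
    <back>KEY_BACK</back>
  </remote>
</lircmap>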

(Sadly it turns out I got lucky with the remote; it seems to be using the RC-5x variant which was broken in 3.17; works fine with the 3.16 kernel in Debian 8 (jessie) but nothing later. I’ve narrowed down the offending commit and raised #117221.)

April 26, 2016 08:32 PM

April 25, 2016

Liam Proven

Acorn: from niche to forgotten obscurity and total industry dominance at the same time

More retrocomputing meanderings -- whatever became of the ST, Amiga and Acorn operating systems?

The Atari ST's GEM desktop also ran on MS-DOS, DR's own DOS+ (a forerunner of the later DR-DOS) and today is included with FreeDOS. In fact the first time I installed FreeDOS I was *very* surprised to find my name in the credits. I debugged some batch files used in installing the GEM component.

The ST's GEM was the same environment. ST GEM was derived from GEM 1; PC GEM from GEM 2, crippled after an Apple lawsuit. Then they diverged. FreeGEM attempted to merge them again.

But the ST's branch prospered, before the rise of the PC killed off all the alternative platforms. Actual STs can be quite cheap now, or you can even buy a modern clone:

http://harbaum.org/till/mist/index.shtml

If you don't want to lash out but have a PC, the Aranym environment gives you something of the feel of the later versions. It's not exactly an emulator, more a sort of compatibility environment that enhances the "emulated" machine as much as it can using modern PC hardware.

http://aranym.org/

And the ST GEM OS was so modular that different 3rd parties cloned every component, separately. Some commercially, some as FOSS. The Aranym team basically put together a sort of "distribution" of as many FOSS components as they could, to assemble a nearly-complete OS, then wrote the few remaining bits to glue it together into a functional whole.

So, finally, after the death of the ST and its clones, there was an all-FOSS OS for it. It's pretty good, too. It's called AFROS, Atari Free OS, and it's included as part of Aranym.

I longed to see a merger of FreeGEM and Aranym, but it was never to be.

The history of GEM and TOS is complex.

Official Atari TOS+GEM evolved into TOS 4, which included the FOSS MiNT multitasking layer, and which isn't much like the original ROM version of the first STs.

The underlying TOS OS is not quite like anything else.

AIUI, CP/M-68K was a real, if rarely-seen, OS.

However, it proved inadequate to support GEM, so it was discarded. A new kernel was written using some of the tech from what was later to become DR-DOS on the PC -- something less like CP/M and more like MS-DOS: directories, separated with backslashes; FAT format disks; multiple executable types, 8.3 filenames, all that stuff.

None of the command-line elements of CP/M or any DR DOS-like OS were retained -- the kernel booted the GUI directly and there was no command line, like on the Mac.

This is called GEMDOS and AIUI it inherits from both the CP/M-68K heritage and from DR's x86 DOS-compatible OSes.

The PC version of GEM also ran on Acorn's BBC Master 512 which had an Intel 80186 coprocessor. It was a very clever machine, in a limited way.

Acorn's series of machines are not well-known in the US, AFAICT, and that's a shame. They were technically interesting, more so IMHO than the Apple II and III, TRS-80 series etc.

The original Acorns were 6502-based, but with good graphics and sound, a plethora of ports, a clear separation between OS, BASIC and add-on ROMs such as the various DOSes, etc. The BASIC was, I'd argue strongly, *the* best 8-bit BASIC ever: named procedures, local variables, recursion, inline assembler, etc. Also the fastest BASIC interpreter ever, and quicker than some compiled BASICs.

Acorn built for quality, not price; the machines were aimed at the educational market, which wasn't so price-sensitive, a model that NeXT emulated. Home users were welcome to buy them & there was one (unsuccessful) home model, but they were unashamedly expensive and thus uncompromised.

The only conceptual compromise in the original BBC Micro was that there was provision for ROM bank switching, but not RAM. The 64kB memory map was 50:50 split ROM and RAM. You could switch ROMs, or put RAM in their place, but not have more than 64kB. This meant that the high-end machine had only 32kB RAM, and high-res graphics modes could take 21kB or so, leaving little space for code -- unless it was in ROM, of course.

The later BBC+ and BBC Master series fixed that. They also allowed ROM cartridges, rather than bare chips inserted in sockets on the main board, and a numeric keypad.

Acorn looked at the 16-bit machines in the mid-80s, mostly powered by Motorola 68000s of course, and decided they weren't good enough and that the tiny UK company could do better. So it did.

But in the meantime, it kept the 6502-based, resolutely-8-bit BBC Micro line alive with updates and new models, including ROM-based terminals and machines with a range of built-in coprocessors: faster 6502-family chips for power users, Z80s for CP/M, Intel's 80186 for kinda-sorta PC compatibility, the NatSemi 32016 with PANOS for ill-defined scientific computing, and finally, an ARM copro before the new ARM-based machines were ready.

Acorn designed the ARM RISC chip in-house, then launched its own range of ARM-powered machines, with an OS based on the 6502 range's. Although limited, this OS is still around today and can be run natively on a Raspberry Pi:

https://www.riscosopen.org/content/

It's very idiosyncratic -- both the filesystem, the command line and the default editor are totally unlike anything else. The file-listing command is CAT, the directory separator is a full stop (i.e. a period), while the root directory is called $. The editor is a very odd dual-cursor thing. It's fascinating, totally unrelated to the entire DEC/MS-DOS family and to the entire Unix family. There is literally and exactly nothing else even slightly like it.

It was the first GUI OS to implement features that are now universal across GUIs: anti-aliased font rendering, full-window dragging and resizing (as opposed to an outline), and significantly, the first graphical desktop to implement a taskbar, before NeXTstep and long before Windows 95.

It supports USB, can access the Internet and WWW. There are free clients for chat, email, FTP, the WWW etc. and a modest range of free productivity tools, although most things are commercial.

But there's no proper inter-process memory protection, GUI multitasking is cooperative, and consequently it's not amazingly stable in use. It does support pre-emptive multitasking, but via the text editor, bizarrely enough, and only of text-mode apps. There was also a pre-emptive multitasking version of the desktop, but it wasn't very compatible, didn't catch on and is not included in current versions.

But saying all that, it's very interesting, influential, shared-source, entirely usable today, and it runs superbly on the £25 Raspberry Pi, so there is little excuse not to try it. There's also a FOSS emulator which can run the modern freeware version:

http://www.marutan.net/rpcemu/

For users of the old hardware, there's a much more polished commercial emulator for Windows and Mac which has its own, proprietary fork of the OS:

http://www.virtualacorn.co.uk/index2.htm

There's an interesting parallel with the Amiga. Both Acorn and Commodore had ambitious plans for a modern multitasking OS which they both referred to as Unix-like. In both cases, the project didn't deliver and the ground-breaking, industry-redefiningly capable hardware was instead shipped with much less ambitious OSes, both of which nonetheless were widely-loved and both of which still survive in the form of multiple, actively-maintained forks, today, 30 years later -- even though Unix in fact caught up and long surpassed these 1980s oddballs.

AmigaOS, based in part on the academic research OS Tripos, has 3 modern forks: the FOSS AROS, on x86, and the proprietary MorphOS and AmigaOS 4 on PowerPC.

Acorn RISC OS, based in part on Acorn MOS for the 8-bit BBC Micro, has 2 contemporary forks: RISC OS 5, owned by Castle Technology but developed by RISC OS Open, shared source rather than FOSS, running on Raspberry Pi, BeagleBoard and some other ARM boards, plus some old hardware and RPC Emu; and RISC OS 4, now owned by the company behind VirtualAcorn, run by an ARM engineer who apparently made good money selling software ARM emulators for x86 to ARM holdings.

Commodore and the Amiga are both long dead and gone, but the name periodically changes hands and reappears on various bits of modern hardware.

Acorn is also long dead, but its scion ARM Holdings designs the world's most popular series of CPUs, totally dominates the handheld sector, and outsells Intel, AMD & all other x86 vendors put together something like tenfold.

Funny how things turn out.

April 25, 2016 01:55 PM

April 18, 2016

Jonathan McDowell

Going to DebConf 16

Going to DebConf16

Whoop! Looking forward to it already (though will probably spend it feeling I should be finishing my dissertation).

Outbound:

2016-07-01 15:20 DUB -> 16:45 LHR BA0837
2016-07-01 21:35 LHR -> 10:00 CPT BA0059

Inbound:

2016-07-10 19:20 CPT -> 06:15 LHR BA0058
2016-07-11 09:20 LHR -> 10:45 DUB BA0828

(image stolen from Gunnar)

April 18, 2016 01:12 PM

April 13, 2016

Jonathan McDowell

Software in the Public Interest contributing members: Check your activity status!

That’s a longer title than I’d like, but I want to try and catch the attention of anyone who might have missed more directed notifications about this. If you’re not an SPI contributing member there’s probably nothing to see here…

Although I decided not to stand for re-election at the Software in the Public Interest (SPI) board elections last July, I haven’t stopped my involvement with the organisation. In particular I’ve spent some time working on an overhaul of the members website and rolling it out. One of the things this has enabled is implementation of 2009-11-04.jmd.1: Contributing membership expiry, by tracking activity in elections and providing an easy way for a member to indicate they consider themselves active even if they haven’t voted.

The plan is that this will run at some point after the completion of every board election. A first pass of cleanups was completed nearly a month ago, contacting all contributing members who’d never been seen to vote and asking them to update their status if they were still active. A second round, of people who didn’t vote in the last board election (in 2014), is currently under way. Affected members will have been emailed directly and there was a mail to spi-announce, but I’m aware people often overlook these things or filter mail off somewhere that doesn’t get read often.

If you are an SPI Contributing member who considers themselves an active member I strongly recommend you login to the SPI Members Website and check the “Last active” date displayed is after 2014-07-14 (i.e. post the start of the last board election). If it’s not, click on the “Update” link beside the date. The updated date will be shown once you’ve done so.

Why does pruning inactive members matter? The 2015 X.Org election results provide at least one indication of why ensuring you have an engaged membership is important - they failed to make a by-laws change that a vast majority of votes were in favour of, due to failing to make quorum. (If you’re an X.org member, go vote!)

April 13, 2016 12:04 PM

April 12, 2016

Jess Rowbottom

Isn’t It Bleeding Obvious?

Having a bit of spare time on my hands over Christmas last year I started writing music again, all leading up to an album release on 17th November 2016, a year after the first song was written. It’s a bit of a mix of genres and styles, but I play most things on there, involving talented pals when it feels right.

You can find out more about it (including a rather nice interview written by student journo Andy Carson) over on The Bleeding Obvious website.

There’s also a Facebook page, and a Twitter stream – of course there is, this is 2016…

by Jess at April 12, 2016 07:16 AM

April 06, 2016

Andy Smith (strugglers.net)

rsync and sudo conundrum

Scenario:

  • You’re logged in to hostA
  • You need to rsync some files from hostB to hostA
  • The files on hostB are only readable by root and they must be written by root locally (hostA)
  • You have sudo access to root on both
  • You have ssh public key access to both
  • root can’t ssh between the two

Normally you’d do this:

hostA$ rsync -av hostB:/foo/ /foo/

but you can’t because your user can’t read /foo on hostB.

So then you might try making rsync run as root on hostB:

hostA$ rsync --rsync-path='sudo rsync' -av hostB:/foo/ /foo/

but that fails because ssh needs a pseudo-terminal to ask you for your sudo password on hostB:

sudo: no tty present and no askpass program specified
rsync: connection unexpectedly closed (0 bytes received so far) [Receiver]
rsync error: error in rsync protocol data stream (code 12) at io.c(226) [Receiver=3.1.1]

So then you can try giving it an askpass program:

hostA$ rsync \
       --rsync-path='SUDO_ASKPASS=/usr/bin/ssh-askpass sudo rsync' \
       -av hostB:/foo/ /foo/

and that nearly works! It pops up an askpass dialog (so you need X11 forwarding) which takes your password and does stuff as root on hostB. But ultimately fails because it’s running as your unprivileged user locally (hostA) and can’t write the files. So then you try running the lot under sudo:

hostA$ sudo rsync \
       --rsync-path='SUDO_ASKPASS=/usr/bin/ssh-askpass sudo rsync' \
       -av hostB:/foo/ /foo/

This fails because X11 forwarding doesn’t work through the local sudo. So become root locally first, then tell rsync to ssh as you:

hostA$ sudo -i
hostA# rsync \
       -e 'sudo -u youruser ssh' \
       --rsync-path 'SUDO_ASKPASS=/usr/bin/ssh-askpass sudo rsync'\
       -av hostB:/foo /foo

Success!

Answer cobbled together with help from dutchie, dne and dg12158. Any improvements? Not needing X11 forwarding would be nice.
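
One way to avoid the X11 forwarding requirement, if the security trade-off is acceptable, is a sudoers entry on hostB letting your user run rsync (and only rsync) as root without a password; then the remote sudo no longer needs an askpass at all. A sketch, assuming rsync lives at /usr/bin/rsync on hostB:

# On hostB, added via visudo:
#   youruser ALL=(root) NOPASSWD: /usr/bin/rsync
# Then, as root on hostA (ssh'ing as your own user, as above):
hostA# rsync \
       -e 'sudo -u youruser ssh' \
       --rsync-path='sudo rsync' \
       -av hostB:/foo/ /foo/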

Alternate methods:

  • Use tar:
    $ ssh \
      -t hostB 'sudo tar -C /foo -cf - .' \
      | sudo tar -C /foo -xvf -
  • Add public key access for root
  • Use filesystem ACLs to allow unprivileged user to read files on hostB.

by Andy at April 06, 2016 02:21 PM

April 05, 2016

Liam Proven

The AmigaOS lives on! It's up to 4.1 now. But is there any point today?

I am told it's lovely to use. Sadly, it only runs on obscure PowerPC-based kit that costs a couple of thousand pounds and can be out-performed by a £300 PC.

AmigaOS's owners -- Hyperion, I believe -- chose the wrong platform.

On a Raspberry Pi or something, it would be great. On obscure expensive PowerPC kit, no.

Also, saying that, I got my first Amiga in the early 2000s. If I'd had one 15y earlier, I'd probably have loved it, but I bought a 2nd hand Archimedes instead (and still think it was the right choice for a non-gamer and dabbler in programming).

A few years ago, with a LOT of work using 3 OSes and 3rd-party disk-management tools, I managed to coax MorphOS onto my Mac mini G4. Dear hypothetical gods, that was a hard install.

It's... well, I mean, it's fairly fast, but... no Wifi? No Bluetooth?

And the desktop. It got hit hard with the ugly stick. I mean, OK, it's not as bad as KDE, but... ick.

Learning AmigaOS when you already know more modern OSes -- OS X, Linux, gods help us, even Windows -- well, the Amiga seems pretty weird, and often for no good reason. E.g. a graphical file manager, but not all files have icons. They're not hidden, they just don't have icons, so if you want to see them, you have to do a second show-all operation. And the dependence on RAMdisks, which are a historical curiosity now. And the needing to right-click to show the menu-bar when it's on a screen edge.

A lot of pointless arcana, just so Apple didn't sue, AFAICT.

I understand the love if one loved it back then. But now? Yeeeeeeaaaaaah, not so much.

Not that I'm proclaiming RISC OS to be the business now. I like it, but it's weird too. But AmigaOS does seem a bit primitive now. OTOH, if they sorted out multiprocessor support and memory protection and it ran on cheap ARM kit, then yeah, I'd be interested.

April 05, 2016 12:57 PM

March 31, 2016

Aled Treharne (ThinkSIP)

Twitter debate

As part of the run up to UCExpo I’m going to be taking part in my first Twitter debate this afternoon (Tweebate?). I’ll be taking over the SIPHON twitter account: @SIPHON_Networks.

Feel free to fire me your questions – the theme for the debate is going to be the Future of Communications Security, which gives rise to the hashtag for this debate: #comsecfuture

by Aled Treharne at March 31, 2016 12:26 PM

March 30, 2016

Liam Proven

This will get me accused of fanboyism (again), but like it or not, Apple shaped the PC industry.

I recently read that a friend of mine claimed that "Both the iPhone and iPod were copied from other manufacturers, to a large extent."

This is a risible claim, AFAICS.

There were pocket MP3 jukeboxes before the iPod. I still own one. They were fairly tragic efforts.

There were smartphones before the iPhone. I still have at least one of them, too. Again, really tragic from a human-computer interaction point of view.


AIUI, the iPhone originated internally as a shrunk-down tablet. The tablet originated from a personal comment from Bill Gates to Steve Jobs that although tablets were a great idea, people simply didn’t want tablets because Microsoft had made them and they didn’t sell.
Jobs’ response was that the Microsoft ones didn’t sell because they were no good, not because people didn’t want tablets. In particular, Jobs stated that using a stylus was a bad idea. (This is also a pointer as to why he cancelled the Newton. And guess what? I've got one of them, too.)

Gates, naturally, contested this, and Jobs started an internal project to prove him wrong: a stylus-free finger-operated slim light tablet. However, when it was getting to prototype form, he allegedly realised, with remarkable prescience, that the market wasn’t ready yet, and that people needed a first step — a smaller, lighter, simpler, pocketable device, based on the finger-operated tablet.

Looking for a role or function for such a device, the company came up with the idea of a smartphone.

Smartphones certainly existed, but they were a geek toy, nothing more.

Apple was bold enough to make a move that would kill its most profitable line — the iPod — with a new product. Few would be so bold.

I can’t think of any other company that would have been bold enough to invent the iPhone. We might have got to devices as capable as modern smartphones and tablets, but I suspect they’d have still been festooned in buttons and a lot clumsier to use.

It’s the GUI story again. Xerox sponsored the invention and original development but didn’t know WTF to do with it. Contrary to the popular history, it did productise it, but as a vastly expensive specialist tool. It took Apple to make it the standard method of HCI, and it took Apple two goes and many years. The Lisa was still too fancy and expensive, and the original Mac too cut-down and too small and compromised.

The many rivals’ efforts were, in hindsight, almost embarrassingly bad. IBM’s TopView was a pioneering GUI and it was rubbish. Windows 1 and 2 were rubbish. OS/2 1.x was rubbish, and to be honest, OS/2 2.x was the pre-iPhone smartphone of GUI OSes: very capable, but horribly complex and fiddly.

Actually, arguably — and demonstrably, from the Atari ST market — DR GEM was a far better GUI than Windows 1 or 2. GEM was a rip-off of the Mac; the PC version got sued and crippled as a result, so blatant was it. It took MS over a decade to learn from the Mac (and GEM) and produce the first version of Windows with a GUI good enough to rival the Mac’s, while being different enough not to get sued: Windows 95.

Now, 2 decades later, everyone’s GUI borrows from Win95. Linux is still struggling to move on from Win95-like desktops, and even Mac OS X, based on a product which inspired Win95, borrows some elements from the Win95 GUI.

Everyone copies MS, and MS copies Apple. Apple takes bleeding-edge tech and turns geek toys into products that the masses actually want to buy.

Microsoft’s success is founded on the IBM PC, and that was IBM’s response to the Apple ][.

Apple has been doing this consistently for about 40 years. It often takes it 2 or 3 goes, but it does.

  • First time: 8-bit home micros (the Apple ][, an improved version of a DIY kit.)

  • Second time: GUIs (first the Lisa, then the Mac).

  • Third time: USB (on the iMac, arguably the first general-purpose PC designed and sold for Internet access as its primary function).

  • Fourth time: digital music players (the iPod wasn’t even the first with a hard disk).

  • Fifth time: desktop Unix (OS X, based on NeXTstep).

  • Sixth time: smartphones (based on what became the iPad, remember).

  • Seventh time: tablets (the iPad, actually progenitor of the iPhone rather than the other way round).

Yes, there are too many Mac fans, and they’re often under-informed. But there are also far too many Microsoft apologists, and too many Linux ones, too.

I use an Apple desktop, partly because with a desktop, I can choose my own keyboard and pointing device. I hate modern Apple ones.

I don’t use Apple laptops or phones. I’ve owned multiple examples of both. I prefer the rivals.

My whole career has been largely propelled by Microsoft products. I still use some, although my laptops run Linux, which I much prefer.

I am not a fanboy of any of them, but sadly, anyone who expresses fondness or admiration for anything Apple will be inevitably branded as one by the Anti-Apple fanboys, whose ardent advocacy is just as strong and just as irrational.

As will this.

March 30, 2016 06:33 PM

March 26, 2016

Jonathan McDowell

Dr Stoll: Or how I learned to stop worrying and love the GPL

[I wrote this as part of BelFOSS but I think it’s worth posting here.]

My Free Software journey starts with The Cuckoo’s Egg. Back in the early 90s a family friend suggested I might enjoy reading it. He was right; I was fascinated by the world of interconnected machines it introduced me to. That helped start my involvement in FidoNet, but it also got me interested in Unix. So when I saw a Linux book at the Queen’s University bookshop (sadly no longer with us) with a Slackware CD in the back I had to have it.

The motivation at this point was to have a low cost version of Unix I could run on the PC hardware I already owned. I had no knowledge of the GNU Project before this point, and as I wasn’t a C programmer I had no interest in looking at the source code. I spent some time futzing around with it and that partition (I was dual booting with DOS 6.22) fell into disuse. It wasn’t until I’d learnt some C and turned up to university, which provided me with an internet connection and others who were either already using Linux or interested in doing so, that I started running a Linux box full time.

Once I was doing that I became a lot more interested in the Open Source side of the equation. Rather than running a closed operating system whose API wasn’t even properly specified (otherwise I wouldn’t have needed my copy of Undocumented DOS), I had the complete source to both the underlying OS and all the utilities it was using. For someone doing a computer science degree this was invaluable. Minix may have been the OS discussed in the OS Design module I studied, but Linux was a much more feature complete option that I was running on my desktop and could also peer under the hood of.

In my professional career I’ve always welcomed the opportunities to work with Open Source. A long time ago I experienced a particularly annoying issue when writing a device driver under QNX. The documentation didn’t seem to match the observed behaviour of the subsystem I was interfacing with. However due to licensing issues only a small number of people in the organisation were able to actually look at the QNX source. So I ended up wasting a much more senior engineer’s time with queries like “I think it’s actually doing x, y and z instead of a, b and c; can you confirm?”. Instances where I can look directly at the source code myself make me much more productive.

Commercial development also started to make me more understanding of the Free Software nature of the code I was running. It wasn’t just the ability to look at the code which was useful, but also the fact there was no need to reinvent the wheel. Need a base OS to build an appliance on? Debian ensures that the main component is Free for all usage. No need to worry about rolling your own compilers, base libraries etc. From a commercial perspective that allows you to concentrate on the actual product. And when you hit problems, the source is available and you can potentially fix it yourself or at least more easily find out if there’s been a fix for that issue released (being able to see code development in version control systems rather than getting a new upstream release with a whole heap of unrelated fixes in it really helps with that).

I had thus progressed from using FLOSS because it was free-as-in-beer, to appreciating the benefits of Open Source in my own learning and employment experiences, to a deeper understanding of the free-as-in-speech benefits that could be gained. However at this point I was still thinking very much from a developer mindset. Even my thoughts about how users can benefit from Free Software were in the context of businesses being able to easily switch suppliers or continue to maintain legacy software because they had the source to their systems available.

One of the major factors that has helped me to see beyond this is the expansion of the Internet of Things (IoT). With desktop or server software there is by and large a choice about what to use. This is not the case with appliances. While manufacturers will often produce a few revisions of software for their devices, usually eventually there is a newer and shiny model and the old one is abandoned. This is problematic for many reasons. For example, historically TVs have been long lived devices (I had one I bought second hand that happily lasted me 7+ years). However the “smart” capabilities of the TV I purchased in 2012 are already of limited usefulness, and LG have moved on to their current models. I have no intention of replacing the device any time soon, so have had to accept it is largely acting as a dumb display. More serious is the lack of security updates. For a TV that doesn’t require a network connection to function this is not as important, but the IoT is a trickier proposition. For example Matthew Garrett had an awful experience with some ‘intelligent’ light bulbs, which effectively circumvented any home network security you might have set up. The manufacturer’s defence? No longer manufactured or supported.

It’s cases like these that have slowly led me to a more complete understanding of the freedom that Free Software truly offers to users. It’s not just about cost free/low cost software. It’s not just about being able to learn from looking at the source to the programs you are running. It’s not even about the freedom to be able to modify the programs that we use. It’s about giving users true Freedom to use and modify their devices as they see fit. From this viewpoint it is much easier to understand the protections against Tivoization that were introduced with GPLv3, and better appreciate the argument sometimes made that the GPL offers more freedom than BSD style licenses.

March 26, 2016 04:28 PM

March 21, 2016

Liam Proven

Confessions of a Sinclair fan

I'm very fond of Spectrums (Spectra?) because they're the first computer I owned. I'd used my uncle's ZX-81, and one belonging to a neighbour, and Commodore PETs at school, but the PET was vastly too expensive and the ZX-81 too limited to be of great interest to me.

I read an article once that praised Apple for bringing home computers to the masses with the Apple ][, the first home computer for under US$ 1000. A thousand bucks? That was fantasy winning-the-football-pools money!

No, for me, the hero of the home computer revolution was Sir Clive Sinclair, for bringing us the first home computer for under GB £100. A hundred quid was achievable. A thousand would have gone on a newer car or a family holiday.

In 1982, my parents could just about afford a 2nd hand 48k Spectrum. I think they paid £80 for it, postage included. I was so excited to receive it, I couldn't wait to try it out. Rearranging a corner with a desk and a portable TV would take too long. So I lay on the lounge floor, Spectrum plugged into the family colour TV and sitting on the carpet. Said carpet, of course, blocked the vents on the bottom of the Speccy so it overheated and died in half an hour.

Happily, it was under guarantee. I sent it back, the original owner got a warranty repair, returned it to me, and I took much better care of it after that.

I learned BASIC and programming on it. My favourite program was Beta BASIC, which improved the language and the editor. I wrote programs under Beta BASIC, being careful to use a subset of BASIC that I could compile with HiSoft BASIC for the best of the Speccy's meagre performance.

I put it in an LMT 68FX2 keyboard.

Then an Interface 1 and a microdrive. A terrible storage system. I told myself I was used to Sinclair cost-cutting and it would be OK. It wasn't. It was slow and unreliable and the sub-100 kB capacity was crap. I bought special formatting tools to get more capacity, and the reliability got even worse. My watchword became "save 2 copies of everything!" It still stands me in good stead today, when Microsoft Word crashes occasionally corrupt a document or I absent-mindedly save over something important.

So I replaced the 48k Spectrum with a discount ex-demo 128, bought from Curry's. I could save work-in-progress programs to the RAMdisk, then onto Microdrive when they sort of worked. Annoyingly it wouldn't fit into the keyboard. I put the 48's PCB back into its old case and sold it, and mothballed the keyboard. To my surprise and joy, I found it in 2014 when packing up my house to move abroad. It now has a Raspberry Pi 2 in it, and any day now I will fit my new RasPi 3 into it for extra WLAN goodness.

At Uni, I bought an MGT +D and a 5¼" floppy, plus a cheap Panasonic 9-pin dot-matrix printer. The luxury of fast, reliable storage -- wow! 780kB per disk! Yes, the cool kids had the fancy new 3½" drives, but they cost more and the media were 10x more expensive and I was a poor student.

The +D was horrendously unreliable, and MGT were real stars. They invited me to their Cambridge office where Alan Miles plied me with coffee, showed me around and chatted while Bruce Gordon fixed my interface. The designer himself! How's that for customer service?

I am not sure now, 30 years later, but I think they gave me a very cheap deal on a DISCiPLE because they couldn't get the +D to run reliably. Total stars. I later bought a SAM Coupé out of loyalty, but lovely machine as it was, my Acorn Archimedes A310 was my real love by then. There was really no comparison between even one of the best-designed 8-bit home computers ever and a 32-bit RISC workstation.

I wrote my uni essays on that Spectrum 128; I was the only person in my year at Uni to have their own computer!

Years later, I bought a second 128 from an ad in Micro Mart, just to get the super-rare numeric keypad, Spanish keycaps and all. I sold the computer and kept the keypad.

So I was a Sinclair fan because their low cost meant I could slowly, piecemeal, acquire and expand the machines and peripherals. I never had the money for an up-front purchase of a machine with a decent keyboard and a good BASIC and a disk interface, such as a BBC Micro, much as I would have liked one. I was never interested in the C64 because its BASIC was so poor.

The modern fascination with them mystifies me a bit. I loved mine because it was cheap enough to be accessible to me; those of its limitations that I couldn't fix, such as the poor graphics, or the lack of integer variables and consequent poor BASIC performance, really annoyed me. The crappy keyboard, I replaced. Then the crappy (but, yes, impressively cheap) mass storage: replaced. The BASIC, kinda sorta replaced. Subsequent additions: proper mass storage, better DOS, proper dot-matrix printer on proper Centronics interface.

Later, I even replaced the DOS in my DISCiPLE with Uni-DOS, replacing sequential file handling (which I never used) with subdirectories (which were massively handy. I mean, nearly a megabyte of storage!)

I was never much of a gamer. I'm still not. At school, I collected and swapped games like some kids collect stamps -- the objective was to own a good collection, not to play them. I usually tried each game a couple of times, but no more. Few kept my attention: The Hobbit, the Stranglers' Aural Quest, The Valley, Jet-Pac. Some of the big hits that everyone else loved, like Manic Miner and Jet Set Willy, I hated. Irritating music, very hard gameplay, and so repetitive.

And yet now, people are so nostalgic for the terrible keyboard, they crowd-funded a new version! One of the first things I replaced, it was so bad! There are new models, new hardware, all to play what are, to be honest, really quite bad games. Poor graphics, lousy sound on the 48. And yet everyone rhapsodises about them.

I agree that, back then, game design was more innovative and gameplay often more varied and interesting than it is today. Now, the graphics look amazing but there seem to me to be about half a dozen different basic styles of gameplay, but with different plots, visuals and soundtracks. Where is the innovation of the level of The Sentinel or Elite or Marble Madness or Q-Bert?

I have a few times played Jet-Pac in an emulator, but I am not a retro-gamer. I enjoy playing with my Toastrack, immaculately restored by Mutant Caterpillar, and my revivified LMT 68FX2, given a brain-transplant by Tynemouth Software. The things I loved about my Sinclairs seem to be forgotten now, and modern Spectrum aficionados are nostalgic about the very things I resented -- the poor graphics and bargain-basement sound -- or replaced: the rotten keyboard. It is so weird that I can't relate to it, but hey, I'm happy that the machines still exist and that there's an active user community.

March 21, 2016 01:57 PM

March 13, 2016

Alex Bloor

Part 3: Postmortem of a Kickstarter campaign; Camsformer

This is part three! In Part 1, we looked at the campaign, its press coverage, what the project promised, and why it interested me..  In part 2 we moved onto the first seeds of doubt… Then a silence descended on … Continue reading

by Alex Bloor at March 13, 2016 07:01 PM

March 12, 2016

Alex Bloor

Part 2: Postmortem of a Kickstarter campaign; Camsformer

This is part two! In Part 1, we looked at the campaign, its press coverage, what the project promised, and why it interested me.. Now we move onto the first seeds of doubt… HOT PRODUCT OR HOT AIR? The first … Continue reading

by Alex Bloor at March 12, 2016 10:49 AM

March 08, 2016

Alex Bloor

Part 1: Postmortem of a Kickstarter campaign; CamsFormer

Kickstarter has had some wonderful successes and some terrible failures. Perhaps the most famous example of the latter is that of “Zano“, the drone that raised millions and only delivered a handful of units, which didn’t work well. On the … Continue reading

by Alex Bloor at March 08, 2016 06:40 PM

February 29, 2016

Liam Proven

Floppies and hard disks and ROMs, oh my! Or why early micros couldn't boot from HD

In lieu of real content, a repurposed FB comment, 'cos I thought it stood alone fairly well. I'm meant to be writing about containers and the FB comment was a displacement activity.



The first single-user computers started to appear in the mid-1970s, such as the MITS Altair. These had no storage at all in their most minimal form -- you entered code into their few hundreds of bytes of memory (not MB, not kB, just 128 bytes or so.)

One of the things that was radical is that they had a microprocessor: the CPU was a single chip. Before that, processors were constructed from lots of components, e.g. the KENBAK-1.

A single-user desktop computer with a microprocessor was called a microcomputer.

So, in the mid- to late-1970s, hard disks were *extremely* expensive -- thousands of $/£, more than the computer itself. So nobody fitted them to microcomputers.

Even floppy drives were quite expensive. They'd double the price of the computer. So the first mass-produced "micros" saved to audio tape cassette. No disk drive, no disk controller -- it was left out to save costs.

If the machine was modular enough, you could add a floppy disk controller later, and plug a floppy drive into that.

With only tape to load and save from, working at 1200 bits per second or so, even small programs of a few kB took minutes to load. So the core software was built into a permanent memory chip in the computer, called a ROM. The computer didn't boot: you turned it on, and it started running the code in the ROM. No loading stage necessary, but you couldn't update or change it without swapping chips. Still, it was really tiny, so bugs were not a huge problem.

Later, by a few years into the 1980s, floppy drives fell in price so that high-end micros had them as a common accessory, although still not built in as standard for most.

But the core software was still on a ROM chip. They might have a facility to automatically run a program on a floppy, but you had to invoke a command to trigger it -- the computer couldn't tell when you inserted a diskette.

By the 16-bit era, the mid-1980s, 3.5" drives were cheap enough to bundle as standard. Now, the built-in software in the ROM just had to be complex enough to start the floppy drive and load the OS from there. Some machines still kept the whole OS in ROM though, such as the Atari ST and Acorn Archimedes. Others, like the Commodore Amiga, IBM PC & Apple Macintosh, loaded it from diskette.

Putting it on diskette was cheaper, it meant you could update it easily, or even replace it with alternative OSes -- or for games, do without an OS altogether and boot directly into the game.

But hard disks were still seriously expensive, and needed a separate hard disk controller to be fitted to the machine. Inexpensive home machines like the early or basic-model Amigas and STs didn't have one -- again, it was left out for cost-saving reasons.

On bigger machines with expansion slots, you could add a hard disk controller and it would have a ROM chip on it that added the ability to boot from a hard disk connected to the controller card. But if your machine was a closed box with no internal slots, it was often impossible to add such a controller, so you might get a machine which later in its life had a hard disk controller and drive added, but the ROMs couldn't be updated so it wasn't possible to boot from the hard disk.

But this was quite rare. An early Mac model, the Mac Plus, added SCSI ports; the PC was always modular; and the higher-end models of STs, Amigas and Archimedes had hard disk interfaces.

The phase of machines with HDs but booting from floppy was fairly brief and they weren't common.

If the on-board ROMs could be updated, replaced, or just supplemented with extra ones in the HD controller, you could add the ability to boot from HD. If the machine booted from floppy anyway, this wasn't so hard.



Which reminds me -- I am still looking for an add-on hard disk for an Amstrad PCW, if anyone knows of such a thing!

February 29, 2016 10:36 PM

February 18, 2016

Liam Proven

Unix: the new legacy platform [tech blog post, by me]

Today, Linux is Unix. And Linux is a traditional, old-fashioned, native-binary, honking great monolithic lump of code in a primitive, unsafe, 1970s language.

The sad truth is this:

Unix is not going to evolve any more. It hasn't evolved much in 30 years. It's just being refined: the bugs are gradually getting caught, but no big changes have happened since the 1980s.

Dr Andy Tanenbaum was right in 1991. Linux is obsolete.

Many old projects had a version numbering scheme like, e.g., SunOS:

Release 1.0, r2, r3, r4...

Then a big rewrite: Version 2! Solaris! (AKA SunOS 5)

Then Solaris 2, 3, 4, 5... now we're on 11 and counting.

Windows reset after v3, with NT. Java did the reverse after 1.4: Java 1.5 was "Java 5". Looks more mature, right? Right?

Well, Unix dates from between 1970 and the rewrite in C in 1972. Motto: "Everything's a file."

Unix 2.0 happened way back in the 1980s and was released in 1991: Plan 9 from Bell Labs.

It was Unix, but with even more things turned into files. Integrated networking, distributed processes and more.

The world ignored it.

Plan 9 2.0 was Inferno: it went truly platform-neutral. C was replaced by Limbo, type-safe, compiling code down to binaries that ran on Dis, a universal VM. Sort of like Java, but better and reaching right down into the kernel.

The world ignored that, too.

Then came the idea of microkernels. They've been tried lots of times, but people seized on the idea of early versions that had problems -- Mach 1 and Mach 2 -- and failed projects such as the GNU HURD.

They ignore successful versions:
* Mach 3 as used in Mac OS X and iOS
* DEC OSF/1, later called DEC Tru64 Unix, also based on Mach
* QNX, a proprietary true-microkernel OS used widely around the world since the 1980s, now in Blackberry 10 but also in hundreds of millions of embedded devices.

All are proper solid commercial successes.

Now, there's Minix 3, a FOSS microkernel with the NetBSD userland on top.

But Linux is too established.

Yes, NextBSD is a very interesting project. But basically, it's just fitting Apple userland services onto FreeBSD.

So, yes, interesting, but FreeBSD is a sideline. Linux is the real focus of attention. FreeBSD jails are over a decade old, but look at the fuss the world is making about Docker.

There is now too much legacy around Unix -- and especially Linux -- for any other Unix to get much traction.

We've had Unix 2.0, then Unix 2.1, then a different, less radical, more conservative kind of Unix 2.0 in the form of microkernels. Simpler, cleaner, more modular, more reliable.

And everyone ignored it.

So we're stuck with the old one, and it won't go away until something totally different comes along to replace it altogether.

February 18, 2016 12:59 PM

February 14, 2016

Denesh Bhabuta

Unending Love

Snow covered Helsinki, FI               Sunday, 14 February 2016               2.10am EET

I seem to have loved you in numberless forms, numberless times…
In life after life, in age after age, forever.
My spellbound heart has made and remade the necklace of songs,
That you take as a gift, wear round your neck in your many forms,
In life after life, in age after age, forever.

Whenever I hear old chronicles of love, its age-old pain,
Its ancient tale of being apart or together.
As I stare on and on into the past, in the end you emerge,
Clad in the light of a pole-star piercing the darkness of time:
You become an image of what is remembered forever.

You and I have floated here on the stream that brings from the fount.
At the heart of time, love of one for another.
We have played along side millions of lovers, shared in the same
Shy sweetness of meeting, the same distressful tears of farewell-
Old love but in shapes that renew and renew forever.

Today it is heaped at your feet, it has found its end in you
The love of all man’s days both past and forever:
Universal joy, universal sorrow, universal life.
The memories of all loves merging with this one love of ours –
And the songs of every poet past and forever.

– With thanks to Rabindranath Tagore

by Admin at February 14, 2016 12:19 AM

February 08, 2016

Steve Kennedy

Speed-up your headless Mac Mini

The Mac Mini is Apple's smallest Mac and though it can be used as a workstation, it's often used as a server for offices/workgroups and even in datacentres. Apple even supplies software to make it function as a server (unsurprisingly called OS X Server - the current release version is v5.0.15, with v5.1 beta 2 as the beta variety).

The server software supports various functions including a mail server and even remote Xcode compilation. However, sometimes it's useful to access the Mac Mini remotely using Apple's Remote Desktop, getting a virtual screen onto the unit itself. Unfortunately, if it's running headless the on-board GPU is not enabled and all graphics are handled by the main CPU, which can make the system seem extremely slow, as the CPU spends its time rendering the screen, handling animations, doing screen refreshes and so on.

Now there is a solution to this: Newertechnology have produced an HDMI Headless Video Accelerator (it's about the same size as a small Bluetooth or WiFi adapter) that plugs into the HDMI port. The Mac Mini then thinks a screen is attached, so the GPU is enabled, all screen handling is done by the GPU rather than the host CPU, and everything runs smoothly again.

The adapter supports a maximum resolution of 1080p (and up to 3840 x 2160 on a late 2014 model). Other models supported are Mid 2010 through to the latest. OS X 10.6.8 is the earliest version of the operating system supported (no drivers are required).

It can be found on-line for around £21.99. A really useful little addition if you're using a Mac Mini in headless mode and accessing it remotely (it also helps with remote animation and anything else that uses the GPU).

by Steve Karmeinsky (noreply@blogger.com) at February 08, 2016 04:39 PM

February 05, 2016

Liam Proven

Why do Macs have "logic boards" while PCs have "motherboards"?

Since it looks like my FB comment is about to get censored, I thought I'd repost it...

-----

Gods, you are such a bunch of newbies! Only one comment out of 20 knows the actual answer.

History lesson. Sit down and shaddup, ya dumb punks.

Early microcomputers did not have a single PCB with all the components on it. They were on separate cards, and all connected together via a bus. This was called a backplane and there were 2 types: active and passive. It didn't do anything except interconnect other components.

Then, with increasing integration, a main board with the main controller logic on it became common, but this had slots on it for other components that were too expensive to include. The pioneer was the Apple II, known affectionately as the Apple ][. The main board had the processor, RAM and glue logic. Cards provided facilities such as printer ports, an 80 column display, a disk controller and so on.

But unlike the older S100 bus and similar machines, these boards did nothing without the main board. So they were called daughter boards, and the one they plugged into was the motherboard.

Then came the Mac. This had no slots so there could be no daughterboards. Nothing plugged into it, not even RAM -- it accepted no expansions at all; therefore it made no sense to call it a motherboard.

It was not the only PCB in the computer, though. The original Mac, remember, had a 9" mono CRT built in. An analogue display, it needed analogue electronics to control it. These were on the Analog Board (because Americans can't spell.)

The board with the digital electronics on it -- the bits that did the computing, in other words the logic -- was the Logic Board.

2 main boards, not one. But neither was primary, neither had other subboards. So, logic board and analog board.

And it's stuck. There are no expansion slots on any modern Mac. They're all logic boards, *not* motherboards because they have no children.

https://www.ifixit.com/Teardown/Macintosh+128K+Teardown/21422

February 05, 2016 05:51 PM

Andy Smith (strugglers.net)

Your Debian netboot suddenly can’t do Ext4?

If, like me, you’ve just done a Debian netboot install over PXE and discovered that the partitioner suddenly seems to have no option for Ext4 filesystem (leaving only btrfs and XFS), despite the fact that it worked fine a couple of weeks ago, do not be alarmed. You aren’t losing your mind. It seems to be a bug.

As the comment says, downloading netboot.tar.gz version 20150422+deb8u3 fixes it. You can find your version in the debian-installer/amd64/boot-screens/f1.txt file. I was previously using 20150422+deb8u1 and the commenter was using 20150422+deb8u2.

Looking at the dates on the files I’m guessing this broke on 23rd January 2016. There was a Debian point release around then, so possibly you are supposed to download a new netboot.tar.gz with each one – not sure. Although if this is the case it would still be nice to know you’re doing something wrong as opposed to having the installer appear to proceed normally except for denying the existence of any filesystems except XFS and btrfs.

Oh and don’t forget to restart your TFTP daemon. tftpd-hpa at least seems to cache things (or maybe hold the tftp directory open, as I had just moved the old directory out of the way), so I was left even more confused when it still seemed to be serving 20150422+deb8u1.
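
For the record, the refresh dance looks roughly like this (a sketch only, assuming the TFTP root is /srv/tftp and you're serving the jessie amd64 netboot files; adjust paths and suite to your setup):

# See which installer build the TFTP tree is currently offering
$ grep -o '20150422+deb8u[0-9]*' /srv/tftp/debian-installer/amd64/boot-screens/f1.txt

# Fetch the current netboot.tar.gz for jessie and unpack it over the TFTP root
$ wget http://ftp.debian.org/debian/dists/jessie/main/installer-amd64/current/images/netboot/netboot.tar.gz
$ sudo tar -C /srv/tftp -xzf netboot.tar.gz

# Restart the TFTP daemon so it stops serving the old files
$ sudo service tftpd-hpa restart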

by Andy at February 05, 2016 09:50 AM

February 04, 2016

Alex Bligh

Nominet – Sir Michael Lyons’ Review

A year or so ago, Nominet‘s board commissioned Sir Michael Lyons to perform an independent external review of Nominet’s operating model and governance arrangements. Sir Michael reported to the board in October, and the board have now released both Sir Michael’s report, and their response.

I spoke to Sir Michael briefly at the last AGM, and at his request had a longer telephone conversation. I have to say I was impressed – he seemed to have got to the heart of the issues pretty quickly. And he appears to have produced a very sensible report.

Sir Michael’s recommendations are to be found at the end of his report. In summary they are level-headed and reasonable. None are particularly radical, and I think he is probably correct that radical surgery is not needed. Nominet appear to have accepted most of them, although I do find it a little strange that they don’t accept the need for a finance director on the board of a company of Nominet’s size.

Given I agree with almost everything Sir Michael has written, I’m not going to pick the report apart in full here. But I will mention two details.

KPIs

Firstly, Sir Michael suggests (page 19):

Introduce clear KPIs for cost control and return on Research & Development

I think introducing KPIs is a valuable strategy, particularly regarding cost control. However, I question the extent of the usefulness of introducing KPIs for ‘return on Research and Development’. Long term R&D produces returns only over the long term; by then it’s too late to control the cost of that R&D. I think maintaining a close eye on what is researched is probably even more important than what it costs. Moreover, in any R&D environment it should be an expected result that whilst some projects will produce transformative commercial successes, some (perhaps most) projects do not come to commercial fruition; accepting that this is an inevitability and not a failure is vital, not least as otherwise staff have the perverse incentive to carry on with such projects.

In his recommendations section, this has been tempered to:

Recommendation 18: Nominet should make public the KPIs by which it holds the executive to account reflecting at the minimum registry costs and progress with diversification

which I believe is a better view of things.

The Nominet Trust and Nominet’s public purpose

Sir Michael makes a number of wise remarks about the Nominet Trust and Nominet’s public purpose. Here are a couple of quotes:

From page 9:

It is not enough to argue that Nominet fulfils its wider public purposes by making a profit or that the bigger that profit, the bigger the social benefit. Nor, for that matter, that it meets its social responsibilities by donating some, or all, of its profit to charitable purposes.

From page 12:

However, there is one point that I would like to underline and that is the importance of taking a wide view of social benefit and so avoid focusing solely on welfare benefits. There may be a danger that this has marked the early days of the Nominet Trust, where the board appears to have put an emphasis on separation and independence for the new Trust (both important issues for charitable status to be secured) but, perhaps, inadequate consideration of purpose.
 
Much of what the Trust has undertaken appears to be valued by the beneficiaries and other commentators but does not appear to be widely understood, or valued, by the membership. In part, this may be remedied by clearer communications in the future, and that is certainly on the agenda, but I believe it may also offer some lessons for the definition of the company’s wider purposes. Lessons, in terms of both the importance of clearly-defined purposes but also of ensuring that they are based on a wide view of social benefit. Most crucially, the interest of the original founders in establishing Nominet as a company capable of contributing to the further development of the internet was, itself, a clear purpose of social benefit. Whilst I believe that objective now needs to be revisited and, perhaps, broken down with a set of purposes reflecting the company’s current understanding of the internet and the wider digital economy, I strongly encourage the board to give weight to objectives which offer economic as well as social benefits. Not least, because these are likely to be more appealing to the membership.

I think Sir Michael has these points exactly correct, though as they did not find their way into a recommendation, the board did not respond to them. Donating money to the Nominet Trust is laudable, but doing so does not mean that Nominet has automatically achieved its public purpose; public purpose should run through its operations. Similarly, Nominet sometimes appears to want to wash its hands of the money once donated (perhaps in order to ensure the Nominet Trust appears to be independent); whilst I agree that Nominet should not involve itself in day to day decisions of the Nominet Trust it should ensure that the Nominet Trust is applying its funds in a manner consistent with Nominet’s own public purpose. Funding more (charitable) projects directly related to internet infrastructure, for instance, would not go amiss.

by Alex Bligh at February 04, 2016 05:57 PM

February 01, 2016

Aled Treharne (ThinkSIP)

Hurting your partners the Basecamp way

I was recently pointed towards this thread on github which I read with growing horror as the thread developed.

Now, fair disclosure – I’ve never really liked Basecamp. In the words of an old friend, they appear to have confused “simple” with “simplistic” and released a product that left me in the position where I always wished it did more than it did. As a result, I’ve only ever used it when customers require it for their projects.

However, the discussion on that thread isn’t related to the product per se, but rather the release of the new version of Basecamp. As developers occasionally need to do from time to time, they’ve rewritten the product. This does occasionally happen when you reach a point where design decisions were made on assumptions that are no longer true – often due to growth. Fact of life. No problems here.

The problem comes in how Basecamp have approached their partners – Basecamp is at its heart an end-user-focused application and some of the principles outlined in the book that they released back in 2006 hold well with that ethos. The problem they have is that along the way, they have taken a product decision to allow integration with third party apps via their API – so when they released v3 of Basecamp and focused around their users, they left their partners in the dark.

My biggest problem with this whole situation is that Basecamp could easily have avoided this if they’d clearly communicated with their partners – a group of people who Basecamp have to engage with in a singularly different way to their end users. Had they said, back in November, “hey, look, v3 is coming out but because it’s a rewrite the API will be different and not backwards compatible, so you need to tell your users that” then I’m sure the partners would have been unhappy but could do something about it. Even identifying which version a user was using based on the old API would have been useful so that at least the third party app could pop up an alert.

Instead Basecamp have strung their partners along for the ride for several months, all the time promising an API “real soon, now!”. As a result, end users who felt, like me, that the product needed extras and used third parties who used the API to implement those extras are now stuck between a rock and a hard place. Companies who wrote integrations are facing a real problem, especially if they’re small shops whose business models rely on this integration.

Maybe I spend too much of my time in a world where APIs, resilience and reliability are “table stakes”. Maybe I’ve been spoiled by companies who place the importance of integration front and centre of their product strategy. Maybe I’m too used to companies who understand how their users use and perceive their products and are willing to communicate clearly to those “users”, whether they’re end-users or third party integrators.

Maybe, once again, I’m just expecting too much from Basecamp.

by Aled Treharne at February 01, 2016 02:08 PM

January 30, 2016

Liam Proven

Fallen giants - comparing the '80s second-generation home computers

A friend of mine who is a Commodore enthusiast commented that if the company had handled it better, the Amiga would have killed the Apple Mac off.

But I wonder. I mean, the $10K Lisa ('83) and the $2.5K Mac ('84) may only have been a year or two before the $1.3K Amiga 1000 ('85), but in those years, chip prices were plummeting -- maybe rapidly enough to account for the discrepancy.

The 256kB Amiga 1000 was half the price of the original 128kB Mac a year earlier.

Could Tramiel's Commodore have sold Macs at a profit for much less? I'm not sure. Later, yes, but then, Mac prices fell, and anyway, Apple has long been a premium-products-only sort of company. But the R&D process behind the Lisa & the Mac was long, complex & expensive. (Yes, true, it was behind the Amiga chipset, too, but less so on the OS -- the original CAOS got axed, remember. The TRIPOS thing was a last-minute stand-in, as was Arthur/RISC OS on the Acorn Archimedes.)

The existence of the Amiga also pushed development of the Mac II, the first colour model. (Although I think it probably more directly prompted the Apple ][GS.)

It's much easier to copy something that someone else has already done. Without the precedent of the Lisa, the Mac would have been a much more limited 8-bit machine with a 6809. Without the precedent of the Mac, the Amiga would have been a games console.


I think the contrast between the Atari ST and the Sinclair QL, in terms of business decisions, product focus and so on, is more instructive.
The QL could have been one of the important 2nd-generation home computers. It was launched a couple of weeks before the Mac.
But Sinclair went too far with its hallmark cost-cutting on the project, and the launch date was too ambitious. The result was a 16-bit machine that was barely more capable than an 8-bit one from the previous generation. Most of the later 8-bit machines had better graphics and sound; some (Memotech, Elan Enterprise) as much RAM, and some (e.g. the SAM Coupé) also supported built-in mass storage.
But Sinclair's OS, QDOS, was impressive. An excellent BASIC, front & centre like an 8-bit machine, but also full multitasking, modularity so it readily handled new peripherals -- but no GUI by default.
The Mac, similarly RAM deprived and with even poorer graphics, blew it away. Also, with the Lisa and the Mac, Apple had spotted that the future lay in GUIs, which Sinclair had missed -- the QL didn't get its "pointer environment" until later, and when it did, it was primitive-looking, and even the modern version still is.



Atari, entering the game a year or so later, had a much better idea where to spend the money. The ST was an excellent demonstration of cost-cutting. Unlike the bespoke custom chipsets of the Mac and the Amiga, or Sinclair's manic focus on cheapness, Atari took off-the-shelf hardware and off-the-shelf software and assembled something that was good enough. A decent GUI, an OS that worked well in 512kB, graphics and sound that were good enough. Marginally faster CPU than an Amiga, and a floppy format interchangeable with PCs.
Yes, the Amiga was a better machine in almost every way, but the ST was good enough, and at first, significantly cheaper. Commodore had to cost-trim the Amiga to match, and the first result, the Amiga 500, was a good games machine but too compromised for much else.

The QL was built down to a price, and suffered for it. Later replacement motherboards and third-party clones such as the Thor fixed much of this, but it was no match for the GUI-based machines.

The Mac was in some ways a sort of cut-down Lisa, trying to get that ten-thousand-dollar machine down to a more affordable quarter of the price. Sadly, this meant losing the hard disk and the innovative multitasking OS, which were added back later in compromised form -- the latter cursed the classic MacOS until it was replaced with Mac OS X at the turn of the century.

The Amiga was a no-compromise games machine, later cleverly shoehorned into the role of a very capable multimedia GUI computer.

The ST was also built down to a price, but learned from the lessons of the Mac. Its spec wasn't as good as the Amiga, its OS wasn't as elegant as the Mac, but it was good enough.

The result was that games developers aimed at both, limiting the quality of Amiga games to the capabilities of the ST. The Amiga wasn't differentiated enough -- yes, Commodore did high-end three-box versions, but the basic machines remained too low-spec. The third-generation Amiga 1200 had a faster 68020 chip which the OS didn't really utilise, and provision for a built-in hard disk which remained an optional extra. AmigaOS was a pain to use with only floppies, like the Mac -- whereas the ST's ROM-based OS was fairly usable with a single drive. A dual-floppy-drive Amiga was the minimum usable spec, really, and it benefited hugely from a hard disk -- but Commodore didn't fit one.

The ST killed the Amiga, in effect. By providing an experience that was nearly as good in the important, visible ways, Commodore had to price-cut the Amiga to keep it competitive, hobbling the lower-end models. And as games were written to be portable between them both without too much work, they mostly didn't exploit the Amiga's superior abilities.

Acorn went its own way with the Archimedes -- it shared almost no apps or games with the mainstream machines, and while its OS is still around, it hasn't kept up with the times and is mainly a curiosity. Acorn kept its machines a bit higher-end, having affordable three-box models with hard disks right from the start, and focused on the educational niche where it was strong.

But Acorn's decision to go its own way was entirely vindicated -- its ARM chip is now the world's best-selling CPU. Both Microsoft and Apple OSes run on ARMs now. In a way, it won.

The poor Sinclair QL, of course, failed in the market and Amstrad killed it off when it was still young. But even so, it inspired a whole line of successors -- the CST Thor, the ICL One-Per-Desk (AKA Merlin Tonto, AKA Telecom Australia ComputerPhone), the Qubbesoft Aurora replacement main board and later the Q40 and Q60 QL-compatible PC-style motherboards. It had the first ever multitasking OS for a home computer, QDOS, which evolved into SMSQ/e and moved over to the ST platform instead. It's now open source, too.

And Linus Torvalds owned a QL, giving him a taste for multitasking so that he wrote his own multitasking OS when he got a PC. That, of course, was Linux.

The Amiga OS is still limping along, now running on a CPU line -- PowerPC -- that is also all but dead. The open-source version, AROS, is working on an ARM port, which might make it slightly more relevant, but it's hard to see a future or purpose for the two PowerPC versions, MorphOS and AmigaOS 4.

The ST OS also evolved, into a rich multitasking app environment for PCs and Macs (MagiC) and into a rich multitasking FOSS version, AFROS, running on an emulator on the PC, Aranym. A great and very clever little project but which went nowhere, as did PC GEM, sadly.

All of these clever OSes -- AROS, AFROS, QDOS AKA SMSQ/E. All went FOSS too late and are forgotten. Me, I'd love Raspberry Pi versions of any and all of them to play with!

In its final death throes, a flailing Atari even embraced the Transputer. The Atari ABAQ could run Parhelion's HELIOS, another interesting long-dead OS. Acorn's machines ran one of the most amazing OSes I've ever seen, TAOS, which nearly became the next-generation Amiga OS. That could have shaken up the industry -- it was truly radical.

And in a funny little side-note, the next next-gen Amiga OS after TAOS was to be QNX. It didn't happen, but QNX added a GUI and rich multimedia support to its embedded microkernel OS for the deal. That OS is now what powers my Blackberry Passport smartphone. Blackberry 10 is now all but dead -- Blackberry has conceded the inevitable and gone Android -- but BB10 is a beautiful piece of work, way better than its rivals.

But all the successful machines that sold well? The ST and Amiga lines are effectively dead. The Motorola 68K processor line they used is all but dead, too. So is its successor, PowerPC.

So it's the two niche machines that left the real legacy. In a way, Sinclair Research did have the right idea after all -- but prematurely. It thought that the justification for 16-bit home/business computers was multitasking. In the end, it was, but only in the later 32-bit era: the defining characteristic of the 16-bit era was bringing the GUI to the masses. True robust multitasking for all followed later. Sinclair picked the wrong feature to emphasise -- even though the QL post-dated the Apple Lisa, so the writing was there on the wall for all to see.

But in the end, the QL inspired Linux and the Archimedes gave us the ARM chip, the most successful RISC chip ever and the one that could still conceivably drive the last great CISC architecture, x86, into extinction.

Funny how things turn out.

January 30, 2016 06:37 PM

January 25, 2016

Steve Kennedy

Techstars London opens applications for next cohort

Techstars has opened applications for their 5th London program which will run from June 20th with the demo day taking place in September.

They will be accepting 10 to 12 teams and interested companies should apply on-line through F6s.

Techstars has some great mentors (there may be some bias here) and some great companies have come out of the program. It's progressed a lot since Springboard days.

by Steve Karmeinsky (noreply@blogger.com) at January 25, 2016 09:31 PM

The Gadget Show Live show returns to the NEC

The Gadget Show Live once again returns to the NEC in Birmingham from 31st March to 3rd April 2016.

Channel 5 Gadget Show presenters Jason Bradbury, Jon Bentley, Ortis Deeley and Amy Williams will be on the stage and this year a TV episode will be filmed giving members of the public a chance to appear on the show on TV.

There will be 5 areas (including the main stage): -

  • Better Life - Products that can help people or are beautiful in the home
  • Power Up - technology to power their lives, covering anything from wearables and fitness devices to in-car kit
  • The Lab - Future/inspirational tech
  • The Arcade - which is all about gaming

Tickets are available on-line and cost:

  • Child (Thurs): £9.99
  • Adult (Thurs): £16.99
  • Child (Fri, Sat, Sun): £11.99
  • Adult (Fri, Sat, Sun): £18.99

by Steve Karmeinsky (noreply@blogger.com) at January 25, 2016 08:41 PM

January 23, 2016

Alex Smith

Preventing hotlinking using CloudFront WAF and Referer Checking

When you run sites with shareable content, hotlinking becomes a common problem. There are several ways to address this – such as validating the Referer header in your webserver and either issuing a redirect or returning a 403 Forbidden.

If you’re also using a CDN, this becomes less practical, as the CDN stores a copy of your content near the edge: even if your webserver validates the original request’s headers, further requests for that content need to be validated by the CDN itself.

CloudFront now supports this using the WAF Feature – and its ability to match and filter based on Headers; this post will cover how to prevent hotlinking using this feature.

Implementation

Terms

Firstly, to cover off various terms. WAF configurations consist of an ACL, which is associated to a given CloudFront distribution. Each ACL is a collection of one or more rules, and each rule can have one or more match conditions. Match conditions are made up of one or more filters, which inspect the request (e.g. Headers, URI) to match for certain conditions.

WAF Setup

The setup is actually pretty straightforward. In this case for simplicity we’re assuming that the images/static files/etc are separated onto another subdomain, so we only need to validate the header. We create a WAF ruleset containing a single rule, with a single match condition, made up of a single filter. That match condition looks at the ‘Referer’ (sic) header and verifies it ends with one or more values. If the rule is matched, the traffic is allowed. Otherwise, the traffic is blocked by the default rule on the WAF. Below, I’ve covered off how to do this via the console for ease, but at some point I’ll update this to include using the CLI.

Step 0: Determine what needs to be protected. In this case, we’re going to block hotlinking for any files under static.alexjs.eu that don’t have a Referer ending with alexjs.eu.

Step 1: Create a new Web ACL
Create ACL

Step 2: Create a new String Match Condition with a filter matching on Referer. This will match for anything ending ‘alexjs.eu’, to allow us to hotlink from other sites under our own domain. If you need to be more secure (e.g. prevent someone registering stealfromalexjs.eu and using that to hotlink your content), you can have additional match conditions for only valid Referers using ‘Exactly matches’.
stringcheck-ends

Step 3: Create a new rule, and add the specified String Match Condition. Once created, set this new rule to ‘Allow’, and the Default Action to ‘Block’. If you want to test this only, you can set the rule action to ‘Count’ and the default rule to ‘Allow’.

newrule

Step 4: Associate this to the relevant CloudFront distribution, and test.
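
Until that CLI write-up happens, here is a rough sketch of the equivalent match condition using the classic aws waf CLI. The set name is a placeholder, the ID comes from the create call, and the shorthand is from memory, so check it against the current CLI documentation before relying on it:

# Every mutating WAF call needs a fresh change token
$ TOKEN=$(aws waf get-change-token --query ChangeToken --output text)

# Create the string match condition (a "byte match set" in the API)
$ aws waf create-byte-match-set --name referer-alexjs --change-token "$TOKEN"

# Add the filter: Referer header ending with alexjs.eu
$ TOKEN=$(aws waf get-change-token --query ChangeToken --output text)
$ aws waf update-byte-match-set \
    --byte-match-set-id <id-returned-by-the-previous-call> \
    --change-token "$TOKEN" \
    --updates '[{"Action":"INSERT","ByteMatchTuple":{"FieldToMatch":{"Type":"HEADER","Data":"referer"},"TargetString":"alexjs.eu","TextTransformation":"NONE","PositionalConstraint":"ENDS_WITH"}}]'

# The rule and web ACL follow the same pattern (create-rule, create-web-acl),
# and the ACL is then associated with the CloudFront distribution.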

The Result

Now when we request files without the relevant header, they’re blocked at the CDN, and valid requests are still allowed through.


» curl -I static.alexjs.eu/favicon.ico
HTTP/1.1 403 Forbidden

» curl -H "Referer: alexjs.eu" -I static.alexjs.eu/favicon.ico
HTTP/1.1 200 OK

Next Steps

As mentioned, this only works for cases where content is under a separate (sub-)domain. This is due to the AWS WAF not currently being able to match negatives, however this still allows you to protect your content without the need for application modification. In cases where you can modify the application easily, there are other ways to protect your content, such as Signed URLs.

by alexjs at January 23, 2016 07:08 AM

January 15, 2016

Steve Kennedy

Ofcom publishes regulations for 'TV whitespace' tech

Ofcom, the Super Regulator, published in December the new regulations for TV Whitespace technology, which came into force on the 31st December 2015, allowing equipment that meets the regulations to operate on a licence-exempt basis.

In the new digital era of terrestrial TV there are digital multiplexes across the UK. These multiplexes use different channels so that neighbouring transmitters don't interfere with each other, which means there is a lot of potentially unused spectrum in any particular area. Multiplexes sit in the UHF band, which covers 470 - 790 MHz.

In order to avoid interference with existing (licensed) spectrum users, devices will need to communicate with databases which apply rules, set by Ofcom, to put limits on the power levels and frequencies at which devices can operate. There is also a 'kill switch' function whereby the database can tell a device to stop operating completely if interference is found to be occurring.

The UHF TV band is currently allocated for use by Digital Terrestrial Television (DTT) broadcasting and Programme Making and Special Events (PMSE). Currently, Freeview TV channels are broadcast using up to ten multiplexes. Each multiplex requires an 8 MHz channel. Multiplexes are transmitted at different frequency channels across the country in the frequency range 470 to 790MHz.

Whilst a total of 32 channels each 8 MHz wide are reserved for DTT in the UK, normally only one channel per multiplex is used at any given location. In other words, the majority of channels are unused for DTT transmission at any given location. This is required because high-power TV broadcasts using the same frequency need geographic separation between their coverage areas to avoid interference.

The channels that are not used by DTT at any given location can be used by lower-power devices on an opportunistic basis. This opportunistic access to interleaved spectrum is not new. Programme making and special events (PMSE) equipment such as radio microphones and audio devices have been exploiting the interleaved spectrum for a number of years, and Ofcom issues more than 50,000 assignments annually for this type of use.

Ofcom refer to the spectrum that is left over by DTT (including local TV) and PMSE use as TV White Spaces (TVWS). By this they mean the combination of locations and frequencies in the UHF TV band that can be used by new users, operating in accordance with technical parameters that ensure that there is a low probability of harmful interference to DTT reception, PMSE usage or services above and below the band.

The following organisations have signed contracts and completed qualification to run the white space databases (WSDB): -

The 'master' devices that talk to the databases should report their height; if they don't, the database will use conservative default values for the purpose of calculating operational parameters, i.e. it will use height values that result in operational parameters that are equal to or more restrictive than they would be had the device reported its height.

Though the regulations do not specify an update time (for master devices to communicate to the databases), Ofcom has stated a maximum time of 15 minutes which strikes an appropriate balance between the need to be able to act quickly in the event of interference and limiting the practical burden on databases of maintaining frequent communications with potentially large numbers of devices. This may be revised if found to be unsuitable.

The WSD Regulations apply to the United Kingdom and the Isle of Man. They do not extend to the Channel Islands.

A master device is a device which is capable of communicating with and obtaining operational parameters from a database for the purpose of transmitting within the frequency band 470 MHz to 790 MHz.

A slave device is a device which is capable of transmitting within the frequency band 470 MHz to 790 MHz after receiving slave operational parameters from a master device.

Type A equipment is defined as equipment which has an integral antenna, a dedicated antenna or an external antenna, and is intended for fixed location use only.

Type B equipment is defined as equipment which has a dedicated antenna or an integral antenna and is not intended for fixed location use.

WSDs must not be used airborne.

WSDs must be configured in such a way that a user is unable to input, reconfigure or alter any technical or operational settings or features of a device in a way which (i) would alter the technical characteristics of the device which are communicated to a database (this includes the master and slave device characteristics), or (ii) would cause the device to operate other than in accordance with master operational parameters or slave operational parameters, as applicable. An example of (ii) would be the antenna gain. If this parameter is set to be smaller than the actual gain of the antenna, then the device could radiate at a higher level than the limit communicated by the WSDB.

A master device:

  • must be able to determine its location
  • must provide device parameters (defined now as its ‘master device characteristics’) to a database, in order to obtain operational parameters from the database. The device parameters include the location and the technical characteristics of the device listed below. The operational parameters indicate to the device the channels and power levels that it can use, together with other constraints.
  • must only transmit in the UHF TV band after requesting and receiving operational parameters from, and in accordance with, operational parameters provided by a database
  • must apply the simultaneous operation power restriction (described at paragraph 3.23 above), if it operates on more than one DTT channel simultaneously and the master operational parameters indicate that this restriction applies
  • must report back to the database the channels and powers that the WSD intends to use – the channel usage parameters – and operate within those channels and powers.
In addition, where its operational parameters stop being valid, a master device must tell slave devices that are connected to it to stop transmitting and must stop transmitting itself. The operational parameters stop being valid if:
  • a database instructs the master device that the parameters are not valid
  • a master device cannot verify, according to the update procedure, that the operational parameters are valid.

In order to support more WSDBs, Ofcom also intends to publish a machine-readable version of that list on a website hosted by Ofcom, so that a database can be selected by a WSD through a process known as “database discovery”. Ofcom would expect that list to include those database operators which have informed Ofcom that they are ready to start providing services to white space devices.

It is interesting that Sony is moving into this space, which probably means they will start producing equipment that uses white space technology for short-range communication, such as, say, a PS4 talking to its peripherals.

by Steve Karmeinsky (noreply@blogger.com) at January 15, 2016 04:29 PM

Intel Edison, jack of all trades, but maybe master of none

The Intel Edison is a small system-on-chip (SoC) that measures about 35.5 × 25.0 × 3.9 mm (on its carrier PCB) which has a connector on it allowing it to be plugged into other things (it is possible to get the SoC on just the PCB without the edge connector).

The SoC board can then be plugged on to various boards from Intel: one is a breakout board which exposes various pins and has some USB sockets; there's also an Arduino-compatible PCB allowing Arduino shields to be used.

The Edison tries to be everything to everyone, but doesn't always succeed. It actually has two processors inside: a dual-thread, dual-core Atom running at 500MHz and a Quark 32-bit microcontroller running at 100MHz. The Atom runs Yocto Linux and the Quark a Real-time Operating System (RTOS).

It has 1GB of RAM and 4GB of Flash, 802.11 a/b/g/n WiFi and Bluetooth 4.0.

There's a total of 40 I/O pins that can be configured to be: -

  • SD card - 1 interface
  • UART - 2 controllers, 1 with full flow control
  • I2C - 2 controllers
  • SPI - 1 controller with 2 chip selects
  • I2S - 1 controller
  • GPIO - 12 with 4 capable of PWM
  • USB 2.0 - 1 OTG controller
  • Clock output - 32 kHz, 19.2 MHz

Intel provide multiple ways of programming the system: -

  • Arduino IDE (v1.6+, no longer requires an Intel specific build)
  • Eclipse supporting: C, C++, and Python
  • Intel XDK supporting: Node.JS and HTML5

There are other environments that also support Edison (in Arduino or direct mode) such as the node.js Johnny-Five system. Node-red can also be installed directly on the Edison and accessed through its web server. Google's Brillo is also an option now.

Running Linux does have benefits if you're into Linux environments as there's lots of packages that can be downloaded for it or indeed built as required.
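As a small example of that, you can poke the hardware straight from the Yocto shell using the standard Linux sysfs GPIO interface. Note that the GPIO number below is an arbitrary example: the Edison's internal GPIO numbering doesn't match the Arduino-style pin labels on the expansion boards (Intel's mraa library exists largely to hide that mapping), so check the pinout for your board first.

# Toggle a GPIO from the Yocto shell via sysfs (GPIO 48 is an arbitrary example)
echo 48 > /sys/class/gpio/export
echo out > /sys/class/gpio/gpio48/direction
echo 1 > /sys/class/gpio/gpio48/value    # drive the pin high
echo 0 > /sys/class/gpio/gpio48/value    # and low again
echo 48 > /sys/class/gpio/unexport       # tidy up when done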

You'll either love or hate Intel's development environment (XDK).

Integrating Edison into your own projects does give you a lot of flexibility, though the power requirements aren't as low as some other Arduino types (although by the time shields have been added to give the same functionality, their power requirements increase too). In theory it is possible to put the Atom to sleep and have the Quark microcontroller do background, non-CPU-intensive tasks, waking the Atom up when it needs to do some hard processing or data transfers over WiFi, say, but it's not meant to be 'easy' to actually implement.

The basic Edison (just the board) is around £42, on the small breakout board it's about £72 and on the Arduino base it's £96 though on-line pricing varies.

Overall the Edison really does try to be everything to everyone, and it's a pretty powerful computer (well, two), but it may be too generic for lots of things and the variety of programming modes etc. can be confusing.

by Steve Karmeinsky (noreply@blogger.com) at January 15, 2016 03:30 PM

January 09, 2016

Liam Proven

Information about the Oberon language & OS, & its descendants AOS & Bluebottle

I recently received an email from a reader -- a rare event in itself -- following my recent Reg article about educational OSes:

http://www.theregister.co.uk/2015/12/02/pi_versus_oberton/

They asked for more info about the OS. So, since there's not a lot of this about, here is some more info about the Oberon programming language, the Oberon operating system written in it, and the modern version, AOS.

The homepage for the FPGA OberonStation went down for a while. Perhaps it was the interest driven by my article. ;-)

It is back up again now, though:

http://oberonstation.x10.mx/

The Wikipedia page is a good first source for info on the 2 Oberons, the OS:

https://en.wikipedia.org/wiki/Oberon_(operating_system)

... and the programming language:

https://en.wikipedia.org/wiki/Oberon_(programming_language)

Prof. Wirth worked at ETH Zurich, which has a microsite about the Oberon project:

http://www.oberon.ethz.ch/

There is a native port for x86 PCs. I have this running under VirtualBox.

http://www.oberon.ethz.ch/archives/systemsarchive/native_new

There's a good overview here:

http://ignorethecode.net/blog/2009/04/22/oberon/

And the Oberon book is online here:

http://www.projectoberon.com/

Development did not stop on the OS after Prof Wirth retired. It continued and became AOS. This has a different type of GUI called a Zooming UI. The AOS zooming UI is called "Bluebottle" and sometimes newer versions of the OS are thus referred to as Bluebottle.

There is a sort of fan page dedicated to AOS here:

http://sage.com.ua/en.shtml?e1l0

January 09, 2016 03:42 PM

January 07, 2016

Liam Proven

The big choice: privacy or convenience [tech blog post, by me]

So a regular long-term member of one of the Ubuntu lists is saying that they don't trust Google to respect their privacy. This from someone who runs Opera 12 (on Ubuntu with Unity) because they had not noticed it had been updated... for three years.

I realise that I could have put this better, but...

As is my wont, I offered one of my favourite quotes:

Scott McNealy, CEO and co-founder of Sun Microsystems, said it best.

He was put on a panel on internet security and privacy, about 20y ago.

Eventually, they asked the silent McNealy to say something.

He replied:

"You have no privacy on the Internet. Get over it."

He was right then and he's right now. It's a public place. It's what it's for. Communication, sharing. Deal with it.

Run current software, follow best-practice guidelines from the likes of SwiftOnSecurity on Twitter, but don't be obsessive about it, because it is totally pointless.

You CANNOT keep everything you do private and secure and also use the 21st century's greatest communications tool.

So you choose. Use the Internet, and stop panicking, or get off it and stay off it.

Your choice.

Modern OSes and apps do "phone home" about what you're doing, yes, sure.

This does not make them spyware.

http://www.zdnet.com/article/revealed-the-crucial-detail-that-windows-10-privacy-critics-are-missing/?tag=nl.e539&s_cid=e539&ttag=e539&ftag=TRE17cfd61

You want better software? You want things that are more reliable, more helpful, more informative?

Yes?

Then stop complaining and get on with life.

No? You want something secure, private, that you can trust, that you know will not report anything to anyone?

Then go flash some open-source firmware onto an old Thinkpad and run OpenBSD on it.

There are ways of doing this, but they are hard, they are a lot more work, and you will have a significantly degraded experience with a lot of very handy facilities lost.

That is the price of privacy.

And, listen, I am sorry if this is not what you want to hear, but if you are not technically literate enough to notice that you're running a browser that has been out of date for 3 years, then I think that you are not currently capable of running a really secure environment. I am not being gratuitously rude here! I am merely pointing out facts that others will be too nervous to point out.

You cannot run a mass-market OS like Windows 10, Mac OS X or Ubuntu with Unity and have a totally secure private computer.

You can't. End of. It's over. These are not privacy-oriented platforms.

They do exist. Look at OpenBSD. Look at Qubes OS.

But they are hard work and need immense technical skill -- more than I have, for instance, after doing this stuff for a living for nearly 30y. And even then, you get a much poorer experience, like a faster 1980s computer or something.

As it is, after being on my CIX address for 25 years and my Gmail address for 12, all my email goes through Gmail now -- the old address, the Hotmail and Yahoo spamtraps, all of them. I get all my email, contacts and diary, all in one place, on my Mac and on both my Linux laptops and on both my Android and Blackberry smartphones. It's wonderful. Convenient, friendly, powerful, free, cross-platform and based on FOSS and compatible with FOSS tools.

But it means I must trust Google to store everything.

I am willing to pay that price, for such powerful tools for no money.

I am a trained Microsoft Exchange admin. I could do similar with Office 365, but I've used it, and it's less cross-platform, it's less reliable, it's slower, the native client tools are vastly inferior and it costs money.

Nothing much else could do this unless I hosted my own, which I am technically competent to do but would involve a huge amount of work, spending money and still trusting my hosting provider.

You have a simple choice. Power and convenience and ease, or, learning a lot more tech skills and privacy but also inconvenience, loss of flexibility and capability and simplicity.

You run a closed-source commercial browser on what [another poster] correctly points out is the least-private Linux distro that there is.

You have already made the choice.

So please, stop complaining about it. You chose. You are free to change your mind, but if you do, off to OpenBSD you go. Better start learning shell script and building from source.

January 07, 2016 03:18 PM

December 12, 2015

Andy Smith (strugglers.net)

Disabling the default IPMI credentials on a Supermicro server

In an earlier post I mentioned that you should disable the default ADMIN / ADMIN credentials on the IPMI controller. Here’s how.

Install ipmitool

ipmitool is the utility that you will use from the command line of another machine in order to interact with the IPMI controllers on your servers.

# apt-get install ipmitool

List the current users

$ ipmitool -I lanplus -H 192.168.1.22 -U ADMIN -a user list
Password:
ID  Name             Callin  Link Auth  IPMI Msg   Channel Priv Limit
2   ADMIN            false   false      true       ADMINISTRATOR

Here you are specifying the IP address of the server’s IPMI controller. ADMIN is the IPMI user name you will use to log in, and it’s prompting you for the password which is also ADMIN by default.

Add a new user

You should add a new user with a name other than ADMIN.

I suppose it would be safe to just change the password of the existing ADMIN user, but there is no need to have it named that, so you may as well pick a new name.

$ ipmitool -I lanplus -H 192.168.1.22 -U ADMIN -a user set name 3 somename
Password:
$ ipmitool -I lanplus -H 192.168.1.22 -U ADMIN -a user set password 3
Password:
Password for user 3:
Password for user 3:
$ ipmitool -I lanplus -H 192.168.1.22 -U ADMIN -a channel setaccess 1 3 link=on ipmi=on callin=on privilege=4
Password:
$ ipmitool -I lanplus -H 192.168.1.22 -U ADMIN -a user enable 3
Password:

From this point on you can switch to using the new user instead.

$ ipmitool -I lanplus -H 192.168.1.22 -U somename -a user list
Password:
ID  Name             Callin  Link Auth  IPMI Msg   Channel Priv Limit
2   ADMIN            false   false      true       ADMINISTRATOR
3   somename         true    true       true       ADMINISTRATOR

Disable ADMIN user

Before doing this bit you may wish to check that the new user you added works for everything you need it to. Those things might include:

  • ssh to somename@192.168.1.22
  • Log in on web interface at https://192.168.1.22/
  • Various ipmitool commands like querying power status:
    $ ipmitool -I lanplus -H 192.168.1.22 -U somename -a power status
    Password:
    Chassis power is on

If all of that is okay then you can disable ADMIN:

$ ipmitool -I lanplus -H 192.168.1.22 -U somename -a user disable 2
Password:

If you are paranoid (or this is just the first time you’ve done this) you could now check to see that none of the above things now work when you try to use ADMIN / ADMIN.

Specifying the password

I have not done so in these examples but if you get bored of typing the password every time then you could put it in the IPMI_PASSWORD environment variable and use -E instead of -a on the ipmitool command line.

When setting the IPMI_PASSWORD environment variable you probably don’t want it logged in your shell’s history file. Depending on which shell you use there may be different ways to achieve that.

With bash, if you have ignorespace in the HISTCONTROL environment variable then commands prefixed by one or more spaces won’t be logged. Alternatively you could temporarily disable history logging with:

$ set +o history
$ sensitive command goes here
$ set -o history # re-enable history logging

So anyway…

$ echo $HISTCONTROL
ignoredups:ignorespace
$     export IPMI_PASSWORD=letmein
$ # ^ note the leading spaces here
$ # to prevent the shell logging it
$ ipmitool -I lanplus -H 192.168.1.22 -U somename -E power status
Chassis Power is on

by Andy at December 12, 2015 12:34 AM

December 11, 2015

Alex Bligh

On Nominet’s price rise

Nominet has announced that it is to increase its prices for UK domain names.

The announcement states in essence that prices will rise from a minimum of GBP 2.50 per year per domain (i.e. GBP 5.00 for two years – the same per annum for longer periods) to a minimum of GBP 3.75 per year per domain, which is a 50% price rise (assuming one was previously renewing every two years). Nominet note that the price hasn’t changed since 1999, so this is equivalent by my calculation to a (compound) 6% per year price rise. The cost increase is then potentially reduced by new co-marketing programmes. Note that the one year registration price was already GBP 3.50 per year, but that’s a relatively new introduction; if you were renewing domains this way, the price increase is smaller.

I’ve been asked what I think about this, and specifically I’ve been asked to sign this petition, which (as far as I can tell) is calling for an EGM of the company to vote on the price changes and some form of consultation. I’m against the former but in favour, in principle, of the latter (for the reasons set out below), and as such I won’t be signing the petition.

Those hardest hit by the price rise are those maintaining large portfolios of domain names where the domain name fees are a high percentage of their cost base. Most ‘normal’ domain name registrants won’t give two hoots if the price of their domain name increases by GBP 1.25 a year, or even five times that. But those whose business relies on keeping these portfolios in order to speculatively sell a fraction of them, or to attract traffic (and thus ad revenue), are going to be affected significantly. Let’s call this group of people “domainers” (although some don’t like that title). The EGM petition appears to have been started by domainers, and signed by many domainers. In many quarters of the industry, domainers are not a popular group. My personal view is that it’s a legitimate business model (if not one I want to be involved in) provided IPR is not infringed, no consumers are deliberately confused, no animals hurt during filming etc.; but others have different views.

Nominet’s handling of this issue has been a mess; indeed, so poor has it been that it has succeeded in getting several people to sign this petition who would normally have nothing to do with domainers.

Here’s what I think and why (skip to the end for a summary):

  1. Prima facie, Nominet should have the right (somehow) to change its prices. It’s not reasonable to expect a supplier to maintain the same prices ad infinitum. The question is how.

  2. Those who construct business models which rely on a single supplier for a huge percentage of their cost base, where that supplier has the freedom to change its prices, need to educate themselves on business risk. In this instance, they can renew at the old price for up to ten years (until the new prices come into effect), which will clearly have a cash cost. However, this was a risk that should have been evident from the point Nominet began (certainly since 1999). I’m afraid I have no special sympathy here.

  3. All of the justification for this price rise appears to have looked at supply-side issues, i.e. how much it costs Nominet to register a domain. Let’s briefly look into that. As far as I can tell (and as I raised at their last AGM) their average cost and marginal cost per domain appear both to have risen reasonably substantially since I was on the board many years ago. Whilst I accept that there must have been some inflation pressure (e.g. in wages), and the need to maintain an infrastructure handling more load, technology prices have fallen and processes should have been automated. The latter point is why the average wage at Nominet should have (and has) risen; because it should be employing fewer (relatively highly paid) people designing automated systems, not an army of (relatively low paid) administrators doing things manually.

  4. However, despite Nominet’s emphasis on the above, I suspect the real issue is that buried within the accounts are the costs of doing lots of things that do not directly involve .uk registrations. Nominet is attempting to diversify. This might be good, or it might be bad. But Nominet should have been clear as to how much of the increased cost is going towards expenditure in servicing .uk domain names, and how much for other purposes such as increased costs elsewhere (e.g. diversification), or building up reserves (increased revenue without increased costs). Nominet hasn’t published any figures, so we don’t know.

  5. The unexamined side of the equation is demand-side. As far as I can tell, Verisign’s wholesale price is $7.85 per annum (GBP 5.85 per year), and that’s for a thin registry (where the registry provides far fewer services, and the registrar far more). Clearly on this basis Nominet’s prices are and will remain well below market level. It would thus seem that Nominet is providing a fuller product (perhaps a better product) at a far keener price than its main competitor, despite having fewer economies of scale. And it is a product generally loved in its target market (the UK). Why on earth Nominet didn’t use this as the centre-point of its argument, I don’t know.

  6. I don’t think as a general principle price changes should have to be put to a vote of members. This is how things were (for a while) whilst Clause 19A (the ‘Hutty Clause’) was incorporated into Nominet’s articles. I was and remain in favour of its removal. Having members vote on every price change encourages the perception that Nominet is some form of cartel, and fetters the discretion of the directors. It also makes changing prices an unnecessarily difficult business, meaning it is hard for Nominet to respond to changes in the market place (arguably this can’t have been too much of a worry given the number of years without a price change since it was removed). But Nominet is a commercial organisation, not a golf club, and therefore its pricing should be set by its management.

  7. However, there is the question of how the management should set the prices, i.e. what objective are they attempting to achieve? Verisign is a public company, and its directors set prices to maximise profit in the long term. Nominet cannot distribute its profit to shareholders, so how should it set its prices? Should it too maximise profits? For many years the principles were long term cashflow neutrality, long term P&L neutrality, and maintaining a sufficient reserve for legal challenges and market downturn; these were called ‘the Bligh principles’, because (cough) I came up with them, and they seem to have survived a long while, for better or for worse. Prices were then meant to be set to accord with these principles. Some would argue the principles are still relevant, some would argue they have problems (I have a foot in both camps). But the point is that there were transparent principles that everyone knew about, and if they didn’t agree on them, well, they didn’t in general have a better suggestion.

  8. I am of the view that any change to these guiding principles should be carefully and transparently consulted upon; this is not because I’m particularly attached to the principles above, but because deciding which principles drive Nominet’s behaviour is a key matter of governance. Note this is a different matter to a change in prices (following the guiding principles); I’m happy to leave that to management provided they explain how the change better satisfies the guiding principles. If Nominet don’t publish these principles, or an explanation of how a price change better satisfies them, there is no way members can hold them to account. And whilst I recognise members can occasionally be a pain, there is no one else who can hold Nominet’s management to account. Quite apart from that, transparency is in itself a good thing. As is avoiding the appearance of something that might be problematic to the competition authorities.

  9. What appears to have happened now is that there are no guiding principles, or at least none that we know about. The suggestion that prices are set according to cost recovery principles (never particularly felicitously worded) is simultaneously being removed from the terms and conditions. Is the principle now profit maximisation? If so, please come out and say it. Is the principle now ‘whatever the management feel like’? That is not in my view acceptable. But the principles seem to have disappeared. Prices appear to be being set on the basis that ‘Nominet think they should be higher’. If this is not in fact the case, then Nominet has a communications problem.

  10. Lastly, there seems to have been some bizarre criticism of the co-marketing programmes proposed. The objection is that those who register the most domains get the most co-marketing, and that this is unfair. It seems to me self-evident that those who register the most domains should get the most co-marketing funds, as they are meant to be put towards registering domains. Rather, my problem with them is that the co-marketing funds for the larger registrars are too small. How do I work that out? From the site calling for an EGM: ‘Registrars with over 250,000 domains under management can now claim up to £80,000 per registrar. Smaller registrars with under 5000 domains can only claim £2000.’ This completely misses the point. For an organisation with 250,001 domains, Nominet’s providing GBP 0.32 per domain back, reducing the price for that year to GBP 3.43. For an organisation with 4,999 domain names, Nominet’s providing GBP 1.25 per domain back, reducing the price for that year down to GBP 1.25. Or to put it another way, if you have 4,999 domain names as a registrar, and claim your full co-marketing allowance, you will be far better off than before (even if the number of domain names stays the same); if you have 250,000 domain names, you will be worse off than before, unless you increase the number of domain names you sell quite substantially. Every co-marketing program I’ve seen before scales in the other direction – i.e. the larger you are, the better deal you get per item sold. Rather than a bulk discount, Nominet is applying a bulk penalty! Whilst I am sure it has its reasons for this, I have no idea why smaller registrars are complaining it’s unfair on them. Of course not all registrars may be eligible to apply, but that’s not dependent on the size of the registrar. And the co-marketing is presumably directed at generating new registrations rather than renewals (this is co-marketing, presumably meaning it is dependent on Nominet-related marketing spend from the registrar); whilst that may hit those with domain portfolios they are not growing harder, that’s not dependent on size either, and is presumably a desired result (encouraging people to grow the number of domains under management as opposed to merely renew an existing portfolio).

So, back to that petition:

  • Yes, Nominet should have consulted (and should now consult) on any change to its pricing principles, and not change the prices until it has done so; but
  • No, Nominet need not consult (let alone have a vote) on the price change itself

But I think it unsurprising that people are annoyed.

by Alex Bligh at December 11, 2015 09:48 PM