Computer in a plug . . .

From PC World UK.

A new type of PC built into a conventional three-pin plug is being released in the UK.
The Plug Computer is based on a platform developed by the US semiconductor firm Marvell.

The device squeezes a 1.2GHz processor, 512MB of DRAM, 512MB of NAND Flash memory, plus Ethernet and USB ports into a unit no larger than a plug adaptor.

The headless computer plugs straight into the wall, acting as a tiny, low-powered home server.

More here.

This entry was posted in IT and Internet. Bookmark the permalink.

4 Responses to Computer in a plug . . .

  1. Jacques Chester says:

    Very nifty. I think I saw something similar on Slashdot not long ago.

    Milner also claims the Plug Computer offers far better performance and reliability than public cloud services. “You’ve got a full, dedicated 1.2GHz server that you’re connected to,” he said.

    Oooh, what a fibber. There is no way in hell that a plug computer relying on residential AC is more reliable than a server in a proper data centre.

  2. Tel_ says:

    There is no way in hell that a plug computer relying on residential AC is more reliable than a server in a proper data centre.

    You would think that, but iiNet was out of action for half a day when a data centre (in Melbourne? too lazy to look it up) lost power on a triple-redundant circuit. I’ve lived just about everywhere around Sydney and I’ve never seen more than a half-hour outage with regular wall power, and that’s seriously rare. I expect that even a bottom-of-the-line UPS could keep little pluggy up for an order of magnitude longer than the 99.9% outlier.

    The problem with power circuits in expensive data centers is that the data-center management are not power experts, and the redundant circuits don’t get put to the test very often. On the other hand, although residential AC does experience more outages, they tend to be short (say 5 minutes) and the people who supply that power are very adept at dealing with them because that’s all they do, every day. Kind of weird and backwards, but logical.

    There’s kind of an economic theory of engineering: once system X gets a reputation for being very reliable, anyone who knows how to repair system X is seen as redundant and is sacked and/or outsourced. The outsourcing contract makes random promises and generally goes through a disposable front company (should anyone take those promises seriously), plus the outsourcing company faces the same decision-making criteria, and we nest into a layer of recursion. The highly reliable system then fails less often, but causes bigger problems when it does fail, and takes longer to repair. The end result is a constant expected percentage of outage which is determined not by any physical property, but by the willingness of managers and company owners to take risks at various stages down the line.
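    [A rough sketch of the availability arithmetic behind the figures traded above. The outage frequencies and durations are illustrative estimates based on the commenter’s anecdotes, not measured data.]

    ```python
    # Rough availability arithmetic for the scenarios discussed above.
    # Outage figures are illustrative estimates, not measurements.

    HOURS_PER_YEAR = 365 * 24  # 8760

    def availability(downtime_hours_per_year: float) -> float:
        """Fraction of the year a system is up."""
        return 1 - downtime_hours_per_year / HOURS_PER_YEAR

    # "Three nines" (99.9%) permits 8.76 hours of downtime per year.
    three_nines_downtime = HOURS_PER_YEAR * (1 - 0.999)

    # Residential power: say six ~5-minute outages per year (0.5 h total).
    residential = availability(6 * 5 / 60)

    # One half-day data-centre outage, as in the iiNet anecdote.
    datacentre = availability(12)

    print(f"99.9% downtime budget: {three_nines_downtime:.2f} h/year")
    print(f"residential estimate:  {residential:.5%}")
    print(f"data-centre estimate:  {datacentre:.5%}")
    ```

    [On these assumptions, short frequent residential outages cost less annual downtime than one long data-centre failure, which is the backwards-seeming point being argued.]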

  3. Jacques Chester says:

    To counter your observation: Troppo’s server relies on single-source commercial power via UWA, and we’ve had an outage in the past 12 months because of a blackout. I also run a VPS on Slicehost in the USA which has been operating with no downtime for 13 months now.

  4. Jacques Chester says:

    I guess it depends on the data centre operators, is what I’m saying.

Comments are closed.