The XKCD strip above mirrors many internet people's head-shaking at the state's and the security-industrial complex's use of the prefix cyber- for everything on the internet. They're out of touch. It's old-fashioned. That's probably how most people think about it - but I believe you're barking up the wrong tree if you think that's what this is about.
Cyber- comes to us from cyberspace. For most people it's a word that arrived with William Gibson and his - yes, sorry - cyberspace trilogy, with the breakthrough Neuromancer as its most important novel.
But the prefix is older. The source of the word is Norbert Wiener's coinage cybernetics - for the new science of feedback and control he invented during and just after World War II. The word comes from Greek - kybernetes, meaning helmsman - or, already for the ancient Greeks, metaphorically: the one who steers. A leader.
Control
Wiener's cybernetics - and the word cyber - is thus very precisely about one thing: control. Our concept of information - also an offspring of 'the cybernetic turn' after the war - is in a sense a byproduct of the development of control.
The very first use case of cybernetics and information was weapons guidance systems: automatic aiming devices for the army's and the navy's guns. The information was there to establish control. The information was never the point; the control was.
And this root of the word cyber never left the internet - it was never gone at any point. When the NSA and the whole state security apparatus labels everything internet-related with cyber-, it's because the mission never went away - not because they're stuck back in the 80s.
Not even in Gibson's novels are control and power ever absent. Gibson's protagonists are always in the clutches of - or agents for - shadowy, stronger powers; a bit like classical Greek drama, where the stories of Odysseus and all the others are ultimately given and framed by the gods. That is simply the archetype of Gibson's plots, from Neuromancer through Pattern Recognition to The Peripheral. Independence is a dearly won luxury for the very richest - and even it has its limits.
So go ahead and laugh at cyber-this and cyber-that. The joke's on you.
(If you want to read more about the history of information, and of the computer and the net, I can only warmly recommend James Gleick's 'The Information' - or you can read George Dyson's 'Turing's Cathedral', which is also good. Read both and you'll experience a certain deja vu, but they are good enough, and different enough, to be worth the effort.)
In about an hour I'm relaunching Apollo 11 in your browser, to celebrate the greatest engineering project ever undertaken, with the atomic bomb as the only possible exception.
The idea of the project, in these 140-character times, is to give you a chance to experience the mission's enormous scale - for instance, simply its long duration. It takes a couple of days to fly to the moon.
When you work on making new things yourself, if on a much, much smaller scale, you have to tip your hat to the Apollo project's enormous scope and ambition. At its costliest, the Apollo project made up the majority of NASA's budget, which in turn was over 4% of the US federal budget. All in all, NASA spent about $100 billion in today's dollars on getting to the moon.
And mind you, the money was spent very quickly. From 1961 up to the moon landing, NASA carried out about 25 test flights of various subsystems of the moon rocket. Right up until April '68, tests kept finding faults that would have killed the astronauts - had it not been for the extensive pre-flight test program.
The piece of Apollo knowledge I had most thoroughly forgotten, last time I re-enacted the moon journey, was the two trips out to the moon before Apollo 11. Apollo 8 had proven that you could fly safely out to the moon and home again at all. Apollo 10 was a proper dress rehearsal: out to the moon, into the lunar module, a flight high above the moon in the lunar module - and then back, without setting the lander down.
The enormous budget and the enormous ambitions are one thing, but the great lesson of the project, for me, is how close to the moon journey there still were large unsolved challenges - and then how cautiously, methodically and expensively they ultimately went about it. Lots of test flights. Every part tested independently before being tested integrated. No new ground taken without a fully integrated test of the steps before.
When Gutenberg died, the paper book had well over 500 years of life left. When James Watt died, the steam engine had over 150 years of useful life left. A couple of weeks ago Howard Scott, one of the creators of the LP, the long playing record, died - having seen his invention recede into a niche product well before his death. When Bill Gates dies, Windows will be long gone. The lifetime of technologies is dropping dramatically. Where inventing something once meant a kind of eternal life, it now just means 'a few years of relevance'. It's becoming harder and harder to leave a mark on society that will outlast you, if that's the kind of thing that motivates you.
News
Just not what you want. Too much 'Made You Look!'. Too much rehashing. Too much process journalism. No consequences. No real world-building. No real analysis, just storytelling. I'd like less of what I get, and more of all of these. Also: it's not on my Kindle, so not on my phone and not on my forthcoming Kindle Paperwhite, readable in bed after my girlfriend has turned the light off.
Feeds
It's one thing that not enough sources output them anymore, but the feed reading process itself is broken. I've recently recovered from feed reading bankruptcy after about 2 weeks of not attending to my feeds. The experience was horrible; too much piling through the same shit, too much piling through shit I've actually already seen on Twitter or Facebook. What I'd like from my feed reader is higher relevance and "scale adjusted relevancy". When I don't have the attention, I want the required attention to scale, so I can catch up on 14 days in one go, not have to do 14 days of feed-tending work. My feeds should be a newspaper. A week-old feed surface should be a weekly magazine. A month-old feed surface should be a monthly magazine. A year-old feed surface should be an annual review. None of this should require anything but collaborative work.
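To make that concrete, here's a minimal sketch of what "scale adjusted relevancy" could mean - the function, the scoring and the logarithmic budget are all my assumptions, not a feature of any existing reader:

```python
import math

def digest(items, days_away, per_day_budget=30):
    """Return a reading list whose size grows with the log of the absence,
    so 14 days away costs one sitting of catching up, not 14 days of
    feed-tending. items: (relevance_score, entry) pairs piled up while away."""
    budget = int(per_day_budget * (1 + math.log2(max(days_away, 1))))
    ranked = sorted(items, key=lambda pair: pair[0], reverse=True)
    return [entry for _, entry in ranked[:budget]]

backlog = [(i % 97, f"post {i}") for i in range(3000)]
print(len(digest(backlog, days_away=1)))    # 30 items: a newspaper
print(len(digest(backlog, days_away=30)))   # 177 items: a monthly magazine
```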
Blogs
Well, mine is anyway. What's particularly broken about it is its subpar integration with the flows, and the absence of readership after everyone moved to the flows.
Twitter
Shutting down the client competition and innovation. Shutting down feeds and unauthorized access. Shutting off access to the follower graph. This is all bullshit. Twitter has become old media, like Edd Dumbill says, and is as broken as old media.
Are the fixes for all of these connected? Is it one-true-thing? I think I have a dream about what it is. It's blogs again, but this time hooked up to a 'reasonable' aggregator with a mission to act as an infrastructure company. The aggregator connects islands of users to form a distro twitter. The aggregator is uniquely positioned to sell firehose access to the social feed, to the graph, to collaboratively enhanced shared links. It Just Might Work.
I'm imagining this getting built off an open source conversational platform - you're allowed to think Diaspora - which initially thinks of itself as a caching hosted Twitter client, but really is intended to allow other hosted islands of friends to connect. I think Diaspora got it wrong by inventing too much, and claiming too much. What I really want is just to leave, without slamming the door shut.
Is Instagram even a tech company? I'm not trying to be flippant here. I find this a real and interesting question. There have been hundreds of successful social image sharing companies before, and there are going to be hundreds more. In that sense, Instagram is just a recent hit in a string of hits.
In fact, Instagram's engineers keep it simple on purpose. They try to invent as little as possible.
The consumer side of the mobile revolution is more a media revolution than a technical revolution. Most of the hits are simply new rides at the amusement park, or fresh hit singles, because we got bored with last year's hits.
Isn't Instagram more the Angry Birds/Rovio of photos than the Apple of photos?
That's not to take away from Instagram's colossal success - more power to them! It just means that we need to evaluate the success differently. Instagram didn't disrupt or disintermediate or transform or restructure anything. Instagram entertained and connected a lot of people for some time. That's a different function than a typical tech company's. It doesn't generate the same kind of aggregate benefits that more and more tech output from a company does. It doesn't produce the kind of grinding deflationary pressure on older technologies, other companies, other kinds of photography, for instance, that we're used to. Sure, Kodak is dead - but didn't Apple do the killing? Was it Instagram? Weren't we sharing visuals at about the same clip before - just on Facebook + Photobucket + Twitpic + Yfrog + Flickr and on and on?
What's your take? Is Instagram transformative - or just this really nice way to share photos right now....
This video, with its delicious robo-voice, sounds like viral advertising for some razor-smart near-future sci-fi film, but is in fact a real promo video for an ad-hijacking server discovered in the wild by Justin Watt during a recent hotel stay. Movies routinely sell the in-film ad space to companies, so it's to be expected that this will happen in a fully mediated reality as well. Still great sci-fi fodder. Our lives online are subjected to unwilling, full-on transparency, while the transparency of the layers of tech beneath our world of ends degrades. Network neutrality is all about this kind of rewriting. Throttling is just another kind of rewrite.
A kind of follow-up about the deeper and deeper water.
Last year Marc Andreessen wrote a widely read essay in the Wall Street Journal about how software is eating the whole world. The immediate reading of the essay is that Moore's Law - and the ever more sophisticated hardware around us at ever lower prices - means that the software side of... well, of anything, matters more and more. Apple's revolution in mobile telephony consists of succeeding in designing hardware that suddenly turns the phone into an effective, dynamic device for delivering software. It's not really about Apple's phone - as a phone - being better than what Nokia had before. So far so good.
But the opposite pressure is just as relevant: the more culture we grow around us, the more processes, the more expressive that culture becomes. And as the expressiveness goes up, a qualitative leap happens: the culture becomes software. Wired online has a fascinating story about one such phenomenon - about how the Chinese/Taiwanese IT machine, with its increasing sophistication, is killing off the integrators - HP, Cisco, Juniper - that have so far specialized in assembling the components into usable products. The component purchasing process is simply on its way to becoming so sophisticated that there's no reason to go through the integrator - you might as well take delivery straight from the factories. The suppliers are becoming more and more like software - able to integrate on demand for the users. The integration itself has gone from being hardware - Cisco's, HP's, Juniper's - to being software in the purchasing process.
Here's an extremely interesting slide deck (pdf) on the opportunities in chip design, if we allow a little more of the physical characteristics of the chips to play a role in the programming interface. It turns out symbolic simulation of floating point math (i.e. real numbers) is extremely computationally expensive, when you consider that the physics is naturally 'solving' these equations all the time, just by being, you know, real physical entities.
The 'only' cost of a projected 10000x improvement in efficiency is a 1% increase in error rate, but if you change the algorithms to assume a certain level of error - a natural notion in the realm of learning algorithms and AI techniques - that's not really a problem at all.
The reason this is important is that we're moving from symbolic computing to pattern matching, and pattern matching, machine learning, AI and similar types of computation all happen in the real domain. A 10000x advance from more appropriate software buys about 13 applications of Moore's Law - something like 20 years of hardware development we could leapfrog past.
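For scale, here's the back-of-envelope arithmetic behind the "13 applications of Moore's Law" - a minimal sketch, assuming the classic doubling period of roughly 18 months:

```python
import math

speedup = 10_000                  # the projected efficiency gain
doublings = math.log2(speedup)    # ~13.3 doublings of Moore's Law
years = doublings * 1.5           # one doubling every ~18 months
print(f"{doublings:.1f} doublings ~ {years:.0f} years")  # 13.3 doublings ~ 20 years
```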
A few years back I wrote down a guess - completely unhampered by statistics or facts - that in 10-15 years, 90%-95% of all computation would be pattern matching - and I stand by that guess. In fact, I'd like to strengthen it: I think that, asymptotically, all computation in the future will be pattern matching. This also ties into the industrial tendency I was talking about in the previous post. Increasingly, filtering is where the value of computation comes from, and that makes it highly plausible we'll see specialized chips with 10000x optimizations for pattern matching. Would Apple need to ship any of Siri's speech comprehension to the cloud if the iPhone was 10000x more capable?
Postscript: Through odd circumstances I chanced on this link the exact same day I chanced upon this report (pdf) on practical robotics. I'll quote a little from a section in it called 'Let The Physics Do The Walking':
Mechanical logic may be utilized far more often in Nature than we would at first like to admit. In fact, mechanical logic may be used for more in our own robots than we realize[...] Explicit software we originally envisioned to be essential was unnecessary.
Genghis [a robot] provides a further lesson of physics in action. One of the main reasons he works at all is because he is small. If he was a large robot and put his foot in a crack and then torqued his body over, he would break. Larger walking machines usually get around this problem by carefully scanning the surface and thinking about where to put their feet. However, Genghis just scrabbles and makes forward progress more through persistence than any explicit mechanism. He doesn't need to build models of the surface over which he walks and he doesn't think about trying to put his last foot on the same free spot an earlier foot was placed.
Let it be known that this Morten Lund blog post and this 4-year-old classy.dk post are talking about the same thing, only differently.
What are we going to live off in the future? It's a question that presses itself upon us vividly here in the crisis, where jobs are disappearing and the new trades are failing to add new jobs.
A few weeks ago, Weekendavisen ran a series about the - allegedly - hidden Production Denmark out west, where all the factories are. And where all the jobs are being lost these years. It's hardly a coincidence that the articles were written by Jesper Vind Jensen, who I believe grew up a little further down the same street in Ribe that we lived on.
I don't know if I buy the premise that western industrial Denmark lives in hiding, but it is real that a lot of skilled and unskilled factory jobs are disappearing these years - and that the newly unemployed factory workers will hardly be retrained into symbolic analysts by tomorrow, with promising job prospects in organizational development and communications down at the municipality.
That's no good, and as a technologist - one of those who work in staff-less, near-virtual companies - there's naturally a little extra to think about, if you like it here and also wish your neighbor prosperity and a good life.
The articles got me thinking about what kinds of jobs people like me actually help create. I've worked at 4-5 different places. Three of them were projects so early that nothing much happened beyond development; the other two were "real" companies with revenue and that sort of thing. As long as you're in the small development companies, you produce no jobs beyond the one you hold yourself. Job creation only starts once there's actual revenue; then you need salespeople, support staff, accountants and so on.
The development/other mix at the places I've worked was something like
At none of the places - except Ascio - have I helped produce a meaningful number of non-tech jobs. Jacob Bøtter, in a series of tweets, wanted this type of company - the kind that creates no jobs - to be a new economic world order, but the point is precisely that they don't become a new economic world order! More is probably paid out in wages to the supermarket employees on Vesterbrogade than to the IT startup employees, even though the supermarket pays worse.
Some time ago I sat over dinner discussing what technology does to your job, and the conclusion is brutal but obvious: technology's entire modus operandi is to wear down the employees' skills. And it's not as if the technology goes away again with time.
I once read that Bill Gates at some point told an American politician something like "as far as the country's future is concerned, you can't bet on us. Some cars have to get built. Factory floors are needed." - meaning: we in tech are not going to create jobs that don't require tech skills. Jobs moving around atoms that actually weigh something have to be created if we're to have full employment in this country. So if we take that claim seriously - which startup should you join next time?
On the eve of Jobs's exit, let's have another look at the famous video where Steve Ballmer responds to the iPhone.
The interesting thing here actually isn't how ridiculously wrong Ballmer was about the success of the iPhone, but rather what he says about his own product - a recent winphone from Motorola:
It's a very capable machine. It'll do music, it'll do internet, it'll do email, it'll do instant messaging[...]
It's a chestnut of interface critique: Embodiment is good, the concrete beats the abstract, nobody reads online. It drives interfaces towards the tangible, and I'll be the first to agree that good physical design (and design that *feels* physical) is pleasurable and restful on the mind.
None of these facts are, however, easy to reconcile with the fact that every day, 15% of the queries seen by Google are queries Google has never seen before. Put differently, the information space Google presents to the world grows by 15% every day. Imagine a startup experiencing this kind of uptake. You'd consider yourself very lucky - even if a lot of those 15% are spelling mistakes etc.
The 15% number sounds staggering, but it's when you compound it a little that it becomes truly mindblowing - and in fact hard to believe entirely - 15% daily discovery means that in a month, the entire current history of Google searches fades to about 1% of all queries seen. Obviously this isn't a description of typical use, but it is a description of use, nonetheless. This is complete rubbish and I'm embarrassed to have written it, read on below
Now, try to imagine building a physical interface where all the uses it has been put to, since the beginning of time, fade to 1% in a month. That's very hard to do. The thing is that thinking is different, language is different, information is different. The concrete approach breaks down when confronted with the full power of language.
This is also why we'll always have command lines.
So, above I made a really embarrassing Probability 101 error when I tried to compound the "every day we see 15% new queries" statistic. This isn't a coin toss, but something else entirely. Chances are that "every day we see 15% new queries" compounds on a monthly basis to... 15% new queries. To see why, I'm going to make a contrived draw of numbers that matches the "every day we see 15% new queries" statistic.
Let's suppose we wanted to produce a string of numbers, 100 every day, so that we can say that "every day we see 15 numbers we haven't seen before". The easiest way to do that is to just start counting from 1, so the first day we see the numbers 1..100. Of course on the first day we can't match the statistic, since we haven't seen any numbers before.
On the second day however we draw 85 times from the numbers we have already seen - we just run the numbers 1..85 - and for the remaining 15 we continue counting where we left off on day 1, so on day 2 we would have the numbers 1..85,101..115. On day 3 we run 1..85,116..130 and so on.
This way, it's still true that "every day we see 15 numbers we haven't seen before", but at the end of the first month (30 days) you will have seen a total of 100+29*15 = 535 numbers.
In month 2 (let's say that's 30 days also) we change things a little. Instead of just running through 1..85, we keep counting upwards until we have cycled through all the numbers we saw in month 1. There were 535 of those, so that'll only take 7 days. Doing this, you'll see 30*15 = 450 new numbers and 535 old ones - i.e. 46% of all the numbers you see in month 2 are numbers you've never seen before.
In month 3 (still 30 days) we do the same thing as we did in month 2, but this time there are 535+450 old ones, so the 450 new ones only amount to 31% of all the numbers we see in month 3.
We continue like this. The most already-seen numbers we have time to run through, doing 85 a day for 30 days, is 30*85 - and we'll still have 30*15 new ones - so lo and behold, when we continue this process we end up seeing 15*30/(15*30+85*30) = 15/(15+85) = 15/100 = 15% numbers we have never seen before.
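Here's a minimal sketch that runs the contrived draw above and measures, per 30-day month, the share of distinct numbers never seen in an earlier month - the 85/15 split and the month lengths are straight from the setup above:

```python
def simulate(months=8, days_per_month=30, per_day=100, new_per_day=15):
    old_per_day = per_day - new_per_day
    next_new, recycle = 1, 1
    seen_in_earlier_months = set()
    first_day = True
    for month in range(1, months + 1):
        this_month = set()
        for _ in range(days_per_month):
            if first_day:
                # day one: no history yet, so all 100 numbers are brand new
                this_month.update(range(next_new, next_new + per_day))
                next_new += per_day
                first_day = False
                continue
            # 85 re-draws, cycling through the numbers seen so far
            for _ in range(old_per_day):
                if recycle >= next_new:
                    recycle = 1
                this_month.add(recycle)
                recycle += 1
            # then 15 numbers nobody has ever seen before
            this_month.update(range(next_new, next_new + new_per_day))
            next_new += new_per_day
        novel = len(this_month - seen_in_earlier_months)
        print(f"month {month}: {100 * novel / len(this_month):.0f}% never seen before")
        seen_in_earlier_months |= this_month

simulate()  # prints roughly 100%, 46%, 31%, 24%, 19%, 16%, then settles at 15%
```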
Once in a while it's worth wondering what profound changes, if any, we're in for in the next decade. With that in mind: what's going to be the most common prosthetic in 2020 that none of us have today? Phones and smartphones are out of the running - we already all have those. Tablets are almost out of the running - or they would probably be the answer.
Let's exclude them - then what is it going to be? Or is the question wrong - like asking "what will be the most popular programming language in the home in 1990" in 1978? Will evolution be elsewhere? Won't technology be evolving in the prosthetic space at all?
My professional bet is on biohacks, but that might just be a little too science-fictiony for a while to come. Other than that, a swarm of chips around the phone seems likely to me. iPhone-ready jackets and watches and glasses and pockets. 2020 might be too close for that. It might take another 5-10 years.
Working under the assumption that Marc Hedlund's post - on why Wesabe didn't make it as a personal financial aggregator - is accurate, there are a couple of superficial conclusions: At play in success are convenience, utility and perceived utility - which is not the same thing as actual utility.
By Hedlund's reasoning, Mint was way better at perceived utility than actual utility - the quality of its imported financial data was low - but it was clearly more convenient.
Supposing only these three things are in play, there are two possible conclusions: Utility doesn't matter, only perceived utility does. And the other one: Convenience is essential. You don't need to compete on utility above a basic threshold, you need to change the rules a little, so people care about your product at all, and then you simply win on convenience after that.
When I've been doing stuff with Morten, we've always had a basic dividing line in how we are interested in things, which has been very useful in dividing the work and making sure we covered the idea properly. Morten is intensely interested in use, and I'm intensely interested in capabilities and potentials.
Any good idea needs both kinds of attention, so it's good to work with someone else who can really challenge your own perspective. If only we had a third friend who was intensely interested in making money we could have really made something of it. It's never too late, I suppose.
Anyway, my part of the bargain is capabilities. Yesterday evening, and this morning, I added another year's worth of lifetime to my aging* Android phone, one of the original Google G1 development phones.
It's a slow, old device compared to the current android phones. Yesterday, however, by installing Cyanogenmod on the phone, I upgraded to Android 2.2 - Froyo - and boy, that's a lot of capability I just added to the phone**.
First, about the lifetime: Froyo has a JIT - an accelerator, if you're not a technical person - which makes it possible for my aging phone to keep up with more demanding applications that expect better hardware.
Secondly, Froyo is supported by the Processing environment, for experimental programming, so now I can build experimental touch interfaces in minutes, using the convenience of Processing. This makes both Processing and the phone infinitely more useful.
Thirdly, Froyo has a really nice "use your phone as an Access Point" app for sharing your 3G connection over WiFi***. I had a hacked one for Android 1.6 as well, and occasionally this is just a really nice appliance to have in a roomful of people bereft of internet.
Fourth, considering that Chrome To Phone is just an easy demo of the cloud services behind Froyo it sure feels revolutionary. Can't wait to see people maxing out this particular capability.
Fifth, and it feels silly to have to mention this, but Froyo is the first smartphone HW/SW combo where you can add storage in a reasonable way, i.e. move apps + everything to basically unlimited, replaceable storage.
On top of all the conveniences of not being locked down - easy access to the file system, easy backup of text messages and call logs - this feels like a nice edition of Android to plateau on for a while. If the next year or so is more about hardware, new hardware formats like tablets, and just polishing the experience using all of these new capabilities, I think that'll work out great.
* (1.5 yrs old; yes it sucks that this is 'old'. We need to do better with equipment lifetimes)
** I'm going to put a couple of detailed howto posts on the hacks blog over the next couple of days, so you can do the same thing.
*** For Cyanogenmod, you need this.
A couple of minutes ago, the technician got done installing the new remote read-off functionality on our heating system, here at my coop. This ends the need for annual manual read-offs by the coop board (i.e. me) and estimated usage numbers. Presumably the heating company will also be able to tell us interesting stuff about how we spend our energy dollars from the new masses of data they will have available.
Out of curiosity, I asked the technician how the system communicated with the outside world. The answer was simple and obvious: there's a GSM modem in the remote read-off module.
There are two things that interest me about that: Firstly, the infrastructure wasn't built for this. The application could never pay for suitable infrastructure on its own, but as soon as the infrastructure is already there, this becomes a very hard problem with an extremely easy solution. Infrastructure fixes problems in a completely different way than engineers do. Secondly, embedded computers in Copenhagen have access to about the same level of infrastructure - or better - as entire villages in sub-saharan Africa.
I took a look back at my weblog entries for 2003. "Only 7 years ago", it's easy to think, but frankly, reading them, they feel positively pre-historic. I hardly recognize my language or interests from back then.
Let me take you back to the year 2003 in the context of infrastructure.
Back then we were talking about replacing RSS with Atom, because everybody but Dave Winer was in some kind of argument with Winer. We were debating what to call Atom in the first place. This is infrastructure that has clearly receded into the background as we've moved our stuff into silos. I'm a little sad about this, but the kind of fighting the RSS/Atom battle exemplifies is why silos sometimes make more sense and simply win. We need our infrastructure back from the silos in the next 1-4 years.
There were tons of upstarts, and huge debate, over public WiFi. Incumbents - both those providing internet services and mobile information - didn't want public WiFi to happen. In general we were all talking about a dreamy future with data everywhere, which clearly hadn't materialised yet. Today the public WiFi plans seem like a quaint, patchy solution to a problem that got a better infrastructure solution later on. Not that the alternate terror-free future with abundant, communal WiFi everywhere wouldn't have been great, but we seem to be making a reasonable go of it in the future we actually got.
This pattern - "patchy, but possible" local solutions to problems that we later get a global fix for from infrastructure - is recurring. In 2005-2007, location was one of these. Plazes - location from WiFi - was a good idea that has since become almost completely irrelevant. Infrastructure has cut that problem differently; now the important thing is access to social data - how can I conveniently socialize a particular place - not location data, since location is now an ample resource (wherever WiFi is, anyway)*. We didn't have maps, and they weren't free.
2003 was also the year we first heard about Skype. I wrote about it (in Danish) and got a reply to the effect that "I don't believe in it. I already have a phone", which is a lot like the classic response Xerox got way back when they were trying to sell photocopiers for the first time: "But I already have a secretary".
The old rationale doesn't become less rational because of the new technology - it's about all the new rationales that suddenly make sense. Last I checked, Skype accounted for 12% of all international calling.
Also, in slightly different infrastructure, 2003 was when Google started rolling out AdSense, so we got a first stab at how online media were going to get paid. There was a lot of clearly unfounded optimism about this, and the world has basically moved on, even while totally assimilating to AdSense. Now we're talking about stuff like Flattr instead. Early results are at least interesting.
And to think, that back in 2003 we weren't even dreaming about iPhones, iPads and Kindles. About AppStores. Or Twitter or Facebook or YouTube, for that matter. Sounds to me like we simply missed the places for people. Including markets.
*It's worth noting here that location technology is actually still a patchy mess of Skyhook WiFi-assisted location + GSM cell-assisted location + GPS proper, but those are technicalities. The key thing is that the abstraction is in place, and is good enough that we just believe in it.
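A minimal sketch of that abstraction - one "where am I?" call papering over the patchy mess; the provider names and return convention are illustrative, not a real API:

```python
def locate(providers):
    # providers are ordered best-first; each returns (lat, lon) or None
    for name, get_fix in providers:
        fix = get_fix()
        if fix is not None:
            return name, fix
    return "unknown", None

position = locate([
    ("gps", lambda: None),               # no satellite fix indoors
    ("wifi", lambda: (55.676, 12.568)),  # Skyhook-style WiFi lookup
    ("cell", lambda: (55.7, 12.6)),      # coarse GSM cell location
])
print(position)                          # ('wifi', (55.676, 12.568))
```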
I have similar feelings about what we did with Imity back in 2006, btw. Location has taken care of most of that problem. Infrastructure cut it up in a different way, than what we had planned for.
If you want to know why public IT projects turn out so expensive and so bad, you can start by reading this. It's common sense to have an institution like Datatilsynet (the Danish Data Protection Agency), but when the starting point is not-invented-here syndrome - e.g. the statement "The mere fact that the employees' logins happen without a digital signature means we can't go along with it", as quoted in the article - you make any flexible way of trying out new technology in public IT systems impossible. Goodbye to the countless benefits the rest of us harvest on Facebook, Twitter and the like. They will never come into play in a public-sector context.
That becomes an enormous problem in a country where the public sector is this large. If we're to have growth and prosperity in Denmark, we have to figure out how to use IT as well as possible, and as quickly as possible - and then it's a catastrophe that half of our economy has simply cancelled innovation in this way.
Those willing to play along with that kind of defensive requirements-riding charge handsomely for it, and when the defensive requirement lists are allowed to grow without limits, well, then it's them - not usability or new ways of working - that get to determine the web services we end up with.
The bizarre thing here is that the legislation - and Datatilsynet - doesn't distinguish between data of the kind at issue here - student plans - and the real core data in the system, our tax data and so on. If you lock all our dealings with the state down at Fort Knox level, you shut down a lot of creativity out at the edge. Imagine instead taking the equally security-conscious view that the work should go into keeping risks to something as harmless as student plans from escalating into more serious areas.
So Apple passed Microsoft in market cap the other day, and the really interesting thing about it was that it wasn't really bubbly stock market frenzy that did it. Apple is trading at a 21.50 P/E which isn't absurd, even if it is higher than Microsoft's.
Why is Microsoft stalling? Because they already won.
Some years back, before she was fired from HP, Carly Fiorina made a remark about the future of HP to the effect that, since the market for PCs is pretty much completely saturated already, you can't really expect the tech sector to outgrow the economy as a whole. Companies are already spending all they can on IT. There's a soft ceiling somewhere for how much of your revenue can go into tech spending, and the corporation is approaching that level.
Microsoft gets all its revenue from this no-longer-expanding slice of tech spending, so there is simply no way to grow beyond the few percent of general growth.
The deflationary power of technology can push these limits a little, but the key deflationary force these days isn't the PC as such but rather the internet and the networking of businesses.
Apple on the other hand has a lot of market share to gain in laptops and phones. The only saturated Apple-market is digital music players.
Last week a small group of people met and discussed Apple's new, closed rules for use of the iPhone platform, and Facebook's massive land grab announced at the recently concluded F8 conference.
The talk was good, but a little hard to summarize. It could have become a very activist conversation about principles, but instead it became a fairly broad conversation about how the technology world actually evolves, in a dynamic between the market - which is best at "deep" innovation - and the individual companies - which are best at focused, rapid progress. We talked about the dynamic between the two modes of development: a company's attempts at control create value for the company, but also provoke competition elsewhere, and over the long run that sort of thing evens out.
The company perspective on openness will be that it's necessary in order to grow - for a while. At some point it won't necessarily make sense anymore for the mature company, because it no longer generates growth - and that's when focus moves back out to the market, where openness can produce innovation.
Then we discussed whether there is, after all, something different about the new near-monopolies: Google, Facebook, the App Store. Whether the enormous accumulation of data that, say, Google sits on can be out-competed. The consensus hunch, after a while, was probably "yes" - but it clearly takes time.
We talked about what demands you should make of your platforms so you don't suddenly find yourself in a long, drawn-out death struggle where control is tightened to earn more money. "There has to be a cat flap", said Niels Hartvig, using his own Umbraco as an example. There's no doubt the Umbraco project is primarily driven by the sponsoring company, Umbraco, but code is code, and if the balance between the interests of Umbraco the company and Umbraco the open source CMS becomes unpalatable to some of the project's participants/users, they can walk away and take the code with them. That might cost them in innovation pace, but it won't leave them hostage to the company's strategic decisions.
We also discussed how one shouldn't forget that some of these looming monopolies became monopolies by establishing a market in the first place - and that is exactly what a good private sponsor can contribute. It's Amazon that, with the Kindle, created the world's first volume market for e-books. It's Apple that, with iTunes and the App Store, created markets for mobile music and software. It's Google that created modern search engine marketing. You can't just say "open = good" on principle. There are some qualities - transparency etc. - that are good for the market, and some problems when the market maker is suddenly also in the game as a market participant, for instance, but it's not that black and white.
The dynamic is between the demands the rest of us make of the market maker before we think an ecosystem has been created, and the demands the market maker has before it makes sense to create a market at all.
We talked about how the opposition of "open platforms" vs "deeply integrated, user-friendly platforms", as it has been framed in the Apple case, is wrong. There's plenty of convenience and user-friendliness in being open - in letting people draw whatever benefits they want from, say, the iPhone.
We talked surprisingly little about Flash and the specifics of Apple's particular closedness. "Long-term dynamics vs direct control" sums up the balance of control nicely there, I think we concluded. We discussed whether Apple's argument that "control produces better software" is even true. In the short term, maybe - but in the long term you shut out innovation, and it's plausible that this makes the platform's long-term horizon more unstable than it would be as an open market.
Finally, for the Facebook case, we talked about whether Facebook's social graph is lock-in or not. How do we protect ourselves against some absurdly high switching cost in the future? Thomas Mygdal suggested rounding off the evening with a couple of questions about that.
Suggestions for how to avoid handing over ownership of all the behavior Facebook collects. For how to bet on a distributed identity instead of the Facebook identity that has just been activated on thousands of websites. And finally, whether Facebook's latest de-privatization of data won't lead to increased public attention to whether the company even complies with data protection legislation around Europe.
There was general belief that there is an interplay between the distributed and the monolithic/monopoly-controlled here too, but a wide spread in the guesses at the time horizon before it flips back to more individual control again. As a rule of thumb, I think the guess was that Facebook has maybe a 5-year horizon left as an untroubled monopolist.
[That's how the evening looked from my notepad. It's colored by the baggage I brought into the conversation myself, and naturally doesn't cover everything the others were thinking; the conversation was good and diverse. But it is a summary of sorts.]
So, the last couple of Facebook UI changes have been about making Facebook more Twitter-like: stream-oriented and conversation-oriented. With the recent announcements from Twitter's developer conference and Facebook's new stuff today, however, the two companies are on clearly divergent paths.
Twitter's future is about enhancing the value of real time*.
Facebook's future is about enhancing the value of identity**.
This is pretty much how it used to be, before Facebook did all that work to fend off a perceived threat from Twitter, I guess - but having the two companies formulate their future visions within just a couple of weeks of each other makes it clear how completely different they are.
* with more data and new relevance engines.
** sharing the identity to a broader audience than now, making it easier to mix in richer data with a profile.
It seems we're in for a reversal of the privacy defaults on the web, unless you want to stay off social media most of the time. Facebook is reversing the default anonymity we're enjoying now. Sure, DoubleClick maybe already did that a decade ago, but at least we had the illusion. That's all gone now.
This means that the content filters that were previously only relevant for walled-in corporations that did not want their frivolous employees to do frivolous non-work suddenly become relevant for all of us.
It's okay to wear a badge as long as we stay in Disneyland, but we'd probably like to take it off when we're passing through the red light district.
I see a great new market here for the filter vendors. Personal reputation management.
Why is the web landscape different today than it was in the mid 90s? Is it because desktop computers are way better? Is it because our idea of what makes a good website has changed? No. It's the tools, stupid. You can read Joe Kraus's post on building Excite vs building Jot as an example. The tools available in the 90s were expensive and cumbersome. Now there's intense competition between largely free and very capable web frameworks, and on the server you have Apache and MySQL on commodity hardware lowering the price, which makes completely new services feasible.
This is what makes the web such a dynamic place. If you look around at the trendsetting web startups today, practically none of them use tools that even existed 10 years ago. They've switched to tools that are cheaper, but more importantly, tools that are way more efficient for the development team to use. jQuery and Prototype are improvements over roll your own AJAX frameworks. Small teams are reaping huge productivity benefits from Django and Ruby on Rails. Some have moved on from these, even.
Yesterday Apple turned off this particular engine of creation on the iPhone platform. You won't be allowed to innovate toolwise on the iPhone.
Apart from being hideous on openness grounds, this seems profoundly stupid. Nobody is using Flash to piss people off. Flash is used because it's an efficient way to grow rich media applications fast. This is important. Last year I wrote about the world of good it did the ARToolkit to make the migration to Flash. Making a very useful library available in an efficient creative environment suddenly made this previously stale technology relevant again.
Likewise with Rails and Django. The specific needs of these popular frameworks have led to a revolution in the underlying webframeworks as well. Apache is no longer the sine qua non of the open source web stack.
It is simply not the case anywhere that most of the useful innovation around an open platform comes from the platform owner/sponsor/creator.
The notion that keeping people on inefficient Apple tools will make apps better is plainly ridiculous.
That Apple values corporate control of the value extraction on the iPhone platform over innovative pace is a bad omen for users and developers.
[Update: Jean-Louis Gassée's post on the matter is good, doing away with the silly "good for users" defense and simply framing it as a question of owning the momentum.]
[Update II: Here's the thing: this wouldn't feel half as rotten if those had been the rules from the beginning. It's the shifting ground, the bad stewardship of the market, that really takes the cake.]
The conversational revolution of the internet during the last decade has brought about a lot of companies trying to engage with us on a different level than just being corporations. Instead of dehumanizing Terms of Service legalese and Acceptable Use Policies, we're treated to an onslaught of microcopy as social markers - this is a place where people deal with people, and we don't need a contract. We could call these social disclaimers.
The problem, of course, is that sometimes we just want the frictionless and guaranteed behaviour of the market instead of conversational engagement.
My list of the top three most overdone social disclaimers is here
The various reactions to the hand mixer I invented the other day were a response to the usual we're-talking-about-new-things wave on the net, this time actualized by Google Buzz, but otherwise last seen around the iPad and, last year, Google Wave.
Day 1 reactions to new things are, as a rule, completely off in the weeds. It's a mixture of people being unable to tell apart the right things that do get said from everything else - plus a set of standard confusions that accompany new technology
The iPad introductory video has Jonathan Ive waxing poetic on the Arthur C. Clarke chestnut about advanced technology seeming like magic. This is the core of the Apple product promise. Stuff that works super well, with flair and finesse well ahead of the curve.
Magic is even built right into the rhetoric of the Steve Jobs presentations, with the "one more thing" style product reveals.
When the iPhone came out, a reasonable reaction was "can you really build such a thing?". One is used to admiring the relentless drive to perfect software and objects: no feature is sacred, everything is expendable if something else can deliver a better experience.
The problem with the iPad is that it doesn't deliver on the product promise. There is simply nothing ungraspable about the device. The screen, the battery, the applications - it's all stuff we understand very well. We're being shown a magic trick that we all know how to do.
Maybe the second version will have the OLED screen and 20 hr battery life this one was supposed to have.
Back when I worked out at Ascio, I often talked security with our systems guy. Viruses are what they are; there are many, and you need to watch out for them - but because the virus makers attack so broadly, the new viruses also get discovered, and protection shows up in the antivirus software. If you really wanted to do something dangerous on the net, if there's something you really should worry about, it would be a determined attack, very narrowly aimed, on one organization. The attacker would test his malware against all the known antivirus scanners himself and make sure it wasn't detected by the automated systems. Add a little social engineering to make the first strike succeed - it takes little more than persuading a salesperson to load a web page or a PDF - and you're in.
It appears that this is roughly how the attack on Google* - the one that made the company announce it may withdraw from China entirely - was carried out.
Now, you shouldn't believe everything an antivirus vendor says about how dangerous the world is, but it does look like the attack was executed in an astonishingly complete way, across an astonishing number of channels.
Where the newspapers always end up reporting nonsense is on the question of who is capable of carrying out such an attack. In general it takes neither much money nor many people to build one - just talent. And who knows how much control the Chinese state apparatus actually has over skilled hackers. It's a little unclear whether Google really believes the attack was government-sponsored - maybe the attackers went after the kind of info Google has refused to hand over to China, in which case you'd have to assume the intent was to get hold of it. From some of the descriptions, though, it sounds more like classic industrial espionage.
* a smaller number of other companies were attacked as well.
On Friday I was a tourist at a conference about mobile phones in Africa. It's an exciting subject; it's fantastic that it works, and mobile phones make all sorts of other leapfrogging possible, now that the technological leapfrogging has been done and telephony exists everywhere.
We set out with a bit of stats, so here are some for you too: worldwide penetration of mobile telephony is about 2/3 - i.e. roughly 2 out of 3 world citizens have a phone. In Africa the number is 1/3 - with enormous variation, because there's an enormous difference in how rich the African societies are. I've only been to one African country, one of the poorer ones, and sure enough there were phones everywhere there too.
The price of using the phone - relative to income - is sky-high, but SMS and telephony just about work; data traffic you can forget about designing for. Only sponsored use of it works.
Unfortunately the day mostly featured almost-launched projects, not war stories from established successes, but there were plenty of lessons anyway.
Lesson 1: Things arrive out of order. It's not just the oddity of Africa suddenly being as phone-covered as the West; it's also the order of the effects: the technology first, and inside it then comes
From the more traditionally obvious end, we heard (from Stine Lund) about education and care for pregnant women and newborns, with the mobile phone as the communication system. It's largely about letting the infrastructure roll out where it's needed, and about using the phone as a digital teaching medium. The difficulty with survival and health for pregnant women and newborns is that while mortality is admittedly high, it's still the exceptional cases you're hunting for; it's expensive or impossible to simply roll a high standard out across everything, so the need is for better diagnostics and reporting where the women live, plus quick escalation to properly trained help when things don't go as they should.
The aforementioned talk by Herman Chinery-Hesse - founder of BSLGlobal and, according to Tomas Krag of refunite.org, who also spoke that day, a genuine star in his home country, Ghana - was probably the high point of the day. One of the more striking points was that international trade in Africa is almost completely oriented out of Africa, as an echo of the colonial system. The infrastructure is built just so, but that is changing, and Chinery-Hesse saw the next step up as largely a matter of simply getting Africa to function as a market, the way the EU is one.
The challenges are enormous and very Africa-specific: robustness, access to power for communication, illiteracy - that is, how do you adapt the technology to a market where the traders can't read or write - transport, regulation. But conversely, the potential is huge, precisely because all trade is now so local; the value capture happens with international traders, very close to the raw material sources, and very little of the refinement, with its attendant value, happens in Africa or in African hands.
We also heard from a Nokia Finn about user research in East Africa - about how Nokia was working on using the de facto distribution channel for music - piracy - as a social carrier for some of the phone's capabilities. It was a bit short on details, but as an idea it sounded super exciting.
And we heard how to build maintenance-free and diesel-free mobile masts far off the beaten path. Last, but not least, Tomas Krag hinted at some of the technological difficulties of thinking of "Africa" as one thing. It's a complicated place; there are countless countries, countless mobile carriers, countless languages, a huge shortage of education - and the technology curve is there, of course, but development looks different because the economy looks different. You can't plan by the hype curve; you have to think a couple of steps backwards and adapt to a simpler world, one that doesn't have that much to offer technologically - apart from ample trouble - but which in return has an intricate, living social structure.
Almost the most interesting thing about the day was that the ball is rolling without the West having much to do with it. The innovation adapted to the economies and technology around the continent is actually being made - around the continent.
On the connection between regulation and growth, especially in places where it isn't working, see - a little speculative perhaps - Paul Romer's TED talk.
For another day or two, you can also get people's notes from the conference by searching Twitter.
I've been looking a bit at the future problem of water lately. What got me thinking about water was a sci-fi concept like the drawing above: reverse rivers - big pipelines of seawater that, on the route into the ever more water-starved continents, are distilled for human use and for agriculture.
How realistic is such a plan? It's a little tricky to get hold of good water numbers, but in the end I found some from one of the places where the problem is already acute, namely Israel.
There, a series of enormous desalination plants is currently being built, each producing 100 million cubic meters of drinking water per year.
Israel's total available water resource before the plants is 2000 million cubic meters of water per year - really far too little for the population living in the area.
A plant producing 100 million cubic meters thus delivers an extra 5% on top of that amount. So you'd "only" have to build 20 of them to have a fully artificial supply - including water for an agricultural sector that is a net exporter of food.
How much energy does it take to produce drinking water? If you had to do it yourself, without clever equipment, you'd have to boil the water off and condense it again.
That is very energy-intensive. It takes one calorie to heat one gram of water by one degree; here we have to get all the way up to the boiling point, and then also supply the heat of vaporization - about 5 times as much energy as the heating from 0 to 100.
In the high-efficiency desalination plants, by filtering under high pressure and recovering the energy you pour in, you can get down to 4 calories per gram of water - energy corresponding to merely heating the water four degrees, or about 150 times less than naive evaporation.
Converted to the power required for a plant in the 100-million-cubic-meter class, that's only 55 MW of continuous power. Less than a tenth of what the Avedøre power plant produces. Roughly what you'd get out of the wind farm in Copenhagen harbour, if it produced at top capacity around the clock.
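A quick sanity check of those figures - naive distillation (heating plus vaporization) against the 4 calories per gram of the pressure-filtration plants, and the resulting power draw; the 4.186 J/cal conversion is standard, everything else comes from the numbers above:

```python
CAL = 4.186                    # joules per calorie
naive = 100 + 5 * 100          # cal/g: heating 0->100 C plus ~5x that in vaporization heat
plant = 4                      # cal/g: high-pressure filtration with energy recovery
print(naive / plant)           # 150.0 - the "about 150 times less" above

grams_per_year = 100e6 * 1e6   # 100 million m^3 of water, in grams
joules = grams_per_year * plant * CAL
watts = joules / (365.25 * 24 * 3600)
print(watts / 1e6)             # ~53 MW - the "55 MW" above
```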
The point of these slightly tedious conversions is that it can actually be done. You don't have to eat up the whole landscape with wind turbines, or build 40 power plants, to produce enough artificial water for a whole nation. An Avedøre plant or two, plus 20 desalination plants, supplies a country of about 10 million people.
As for the price: the Israeli plant sells the water at about half a dollar per cubic meter - a price that presumably varies quite a bit with the price of energy. In Copenhagen, a cubic meter of household water costs just over 40 kroner.
So you actually could get away with building an artificial Jordan River in a couple of years. It's not an option for the poorest, but it can be done. Water for all of Israel, for a billion dollars a year.
Netbooks, so far, haven't really been interesting. They are cheap - and of course that's interesting in and of itself - but they don't really change what you can do in the world. Their battery life, shape, weight and notably software have been much the same as expensive laptops, just with a little less in the value bundle. Which is perfectly fine for 90% of laptop uses.
That's set to change, though. New software - assuming the network, and consumer-packaged for simplicity, sociality and "cultural computing" more than "admin and creation" style computing - is just about to surface. Fitted with an app store and simplified, the netbook takes on more of an appliance role than a general purpose computing role.
The hardware vendors are adapting to that idea also; moving towards ultra low power consumption and enough battery life that you simply stop thinking about the battery.
Meanwhile, Microsoft is busy squandering this opportunity. They simply don't get this type of environment, apparently - and are intent on office-ifying and desktop-ifying the metaphor. Where Bill Gates' "a computer on every desk" was once a vision of not having computing only in corporations and server parks, it is now severely limiting. Why do I need a desk to have a computer?
I thought Bing vs Wave made an interesting comparison. Bing is a rebranding of completely generic search; absolutely nothing new. Not a single feature in the presentation video does anything I don't already have. And yet it's presented, in classic Microsoft form, as if it were something new, and as if these unoriginal product ideas sprang from Microsoft by immaculate conception.
Contrast that to Google Wave, which - if it does something wrong - is overreaching more than underwhelming. And contrast also Wave's internet-born and internet-ready presentation and launch conditions. It's built on an open platform (XMPP aka Jabber). The Wave whitepapers gladly acknowledge the inspiration from research on collaborative creation elsewhere. The protocol is published. A reference implementation will be open sourced. The hosted Wave service will federate. It is a concern for Google (mentioned in presentations) to give third parties equal access to the plugin system - the company acknowledges that internally grown stuff has an initial advantage and is concerned with leveling the playing field.
Does Microsoft have the culture and the skills to make the same kind of move? I'm not suggesting that there's an evil vs nonevil thing here - obviously Google wins by owning important infrastructure - but just that the style of invention in Wave, based on other people's standards and given away so others can again innovate on top of it, seems completely at odds with Microsoft's notion of how you own the stuff you own.
So Wolfram Alpha - much talked about Google killer - is out. It's not really a Google killer - it's more like an oversexed version of the Google Calculator - good to deal with a curated set of questions.
The cooked examples on the site often look great, of course - there's the stuff you would expect from Mathematica, maths and some physics - but my first hour or two with the service yielded very few answers to the tasks I set myself.
I figured that one of the strengths of the system was that it has data, not pages, so I started asking for population growth by country - did not work. Looking up GDP Denmark historical works, but presents meaningless statistics - like a bad college student with a calculator, averaging stuff that should not be averaged. A GDP time series is a growth curve. The mean is meaningless.
Google needs an extra click to get there - but the end result is better.
I tried life expectancy; again I could only compare a few countries - and again, statistics I didn't ask for dominate the output.
Let's do a head-to-head on some stuff Google Calculator was built for - unit conversion. 4 feet in meters helpfully overshares and gives me the answer in "rack units" as well. Change the scale to 400 feet and you get the answer in multiples of Noah's Ark (!) plus a small compendium of physics facts...
OK - enough with the time series and calculator stuff, let's try just one number lookup: Rain in Sahara. Sadly Wolfram has made a decision: Rain and Sahara are both movie titles, so this must be about movies. Let's compare with Google. This is one of those cases where people would look at the Google answer and conclude we need a real database. The Google page gives a relief organisation that uses "rain in sahara" poetically, to mean relief - and a Swiss rock band - but as we saw, Wolfram sadly concluded that Rain + Sahara are movies, so no database help there.
I try to correct my search strategy to how much rain in sahara, which fails hilariously by informing me that no, the movie "Rain" is not part of the movie "Sahara". The same approach on Google works well.
I begin to see the problem. Wolfram Alpha seems locked in a genius trap, supposing that we are looking for The Answer and that there is one, and that the problem at hand is to deliver The Answer and nothing else. That model of knowledge is just wrong, as the Sahara case demonstrates.
The oversharing (length in Noah's Ark units) when The Answer is at hand doesn't help either, even if it is good nerdy entertainment.
Final task: major cities in Denmark. The answer: We don't know The Answer for that - we have "some answers" but not The Answer, so we're not going to tell you anything at all.
Very few questions are really formulas to compute an answer. And that's what Wolfram Alpha is: A calculator of Answers.
Ubicomp is the old dream of computation in everything - and here is a really good slideshow discussing whether we haven't already gotten it without noticing, in iPods and clever phones and unexpected remixes of real-world data with web data. It really sets the mind going.
When we built Imity - bluetooth autodetecting social network for your cell phone - we did - of course - get the occasional "big brother"-y comment about how we were building the surveillance society. We were always very careful to not frame the application as being about that, careful with the language, hoping to foster a culture that didn't approach the service on those terms. We never got the traction to see whether our cultural setup was sufficient to keep the use on the terms we wanted, but it was still important to have the right cultural idea about what the technology was for, to curb the most paranoid thinking about potentials.
It's simply not a reasonable thing to ask of new technology that it should be harm-proof. Nothing worthwhile is. Cars aren't. Knives aren't. Why would high-tech ever be? And just where in the narrative of some future disaster does the backtracking to find the harm end? Computers and the internet are routinely blamed for all kinds of wrongdoing, whereas the clothing, roads, vehicles and other pre-digital artifacts surrounding something bad routinely are not.
What matters is the culture of use around the technology, whether there is a culture of reasonable use or just a culture of unreasonable use. And you simply cannot infer the culture from the technology. Culture does not grow from the technology. It just does not work that way.
I think a lot of the internet disbelief wrt. The Pirate Bay verdict comes from basically missing this point. "But then Google is infringing as well" floats around. But the important thing here is that Pirate Bay is largely a culture of sharing illegally copied content, whereas Google is largely a culture of finding information.
I think it's important to keep culture in mind - because that in turn sets technology free to grow. We can't blame technology for any potential future harm; we'll just have to not do harm with it in the future - but the flip side of course is that responsibility remains with us.
I haven't read the verdict, but the post-verdict press conference focused squarely on the organization, behaviour and economics of what actually crossed the Pirate Bay search engine, which seems sound.
- That being said, copyright owners are still squandering the digital opportunity by not coming up with new modes of distribution better suited to the digital world. But the internet response wrt. The Pirate Bay - that they simply couldn't be guilty, for technological reasons - does not seem solid to me, if we are to reason in a healthy manner about technology and society at all.
The What-You-Want Web got a number of power boosts this week.
The What-You-Want Web is my just-coined phrase for the lock-in free, non-value-bundled, disintermediated, highly competitive computation, API, and experience fabric one could hope the web is evolving towards. Twitter already lives there; nice to see some more people join.
The important thing about all of these announcements is that they forgo a number of options for making money off free/cheap: Lowering the friction towards zero means the services have to succeed on their own merits. If they fail to offer what I need or want, I can just leave. I don't have to buy into the platform promise of any of these tools, I can just get the stuff that has value to me.
I think in 5 years we will remember Twitter largely as the first radically open company on the web. Considering the highly available search and good APIs, there is literally no aspect of your life on Twitter that you can't take with you.
P.S. (Also, three cheers for Polarrose, launching flickr/facebook-face recognition today. A company adding decisive value with unique technology, born to take advantage of the WYW-Web.)
Pretty good overview of what's wrong with URL shorteners. They destroy the link space and add brittle infrastructure run by who knows whom. We already know that the real value proposition is traffic measurement - i.e. selling your privacy short.
The problem of course is the obvious utility of shorteners.
This is all new stuff, the current state of the art is not how it is going to end.
Yesterday I was in Århus, primarily to see Enter Action at Aros, but also to say hello to acquaintances and visit the NEXT expo*.
Enter Action is definitely worth a visit, even worth a journey. The exhibition is admittedly a bit uneven, in my opinion, and still bears witness to how difficult technology is as a material; it's still a bit of a struggle to get past the material to an expression, even in a creme de la creme setup like this one. Technology has meanwhile become so commonplace around us that it works fine as the very matter of the exhibition too, and that certainly helps. Several of the works make a virtue of the visible machinery.
The singular best piece in the exhibition is Listening Post (video of the work here), which, with sound, darkness and a monumental arrangement of digital text pulled from the net, could easily sustain a whole day's visit.
Maybe this also has something to do with the whole business of traveling for the art. Listening Post, unlike most of the other works in the exhibition, has a cultic, monumental quality that works well as the destination of a journey. The work has, of course, already been showered with awards.
Something that looked like it could have been fantastic, the robotic wheelchairs, wasn't really working while I was there; on the other hand I was pleased that a really energetic teenager with good balance gave a fine demo of the incredible visual treadmill. I wouldn't have lasted 30 seconds myself.
Most annoying thing in the exhibition: a dull room of net art in browsers. Blah. Aros should of course link to it from the website, but it just doesn't work on site. (And yet - I mustn't forget there's a so-last-year Second Life room as well.)
Most surprising thing in the exhibition: my heart beats, if Pulse Room is to be believed, in waltz time.
Bonus Aros recommendation: don't miss Lars Arrhenius' skeleton video a couple of floors further up. There's a Tim Burton-esque grim humor to the skeletal everyday life of Stockholm.
The NEXT exhibition was decent, but not worth a trip in itself. Not quite as cutting edge as in earlier years, but considerably broader - materials science and other engineering-flavored entries were included too. It was lovely to see The Orb, Wiremap and Cellphone Disco in real life, but as that sentence also suggests, it's not easy being NEXT in the blog monoculture. Many of the objects the exhibition manages to dig up, we have all already seen on the net.
And one pet peeve: why is there a Z-machine printer at the exhibition every year when it isn't running, printing the visitors' own 3D creations?
And why weren't there any Siftables?
* The conference disappointed me totally with its lack of napkin origami for the lunchcam, but otherwise Matt Webb's talk (and the nature one), which I caught on the video stream, was really good - sorry Nikolaj, I didn't manage to see your preso.
Artificial intelligence is usually a matter of elegant search algorithms. You have a lot of data, you want to know something about it, and so you use one of a range of elegant search algorithms; there are different approaches - brute force, optimal guesses, random guesses. At the core of such an algorithm sits a test that shows whether you have found what you were looking for.
That's simple enough to understand, once you look past the magical result. It gets more punk when the test at the core of the algorithm is carried out by a laboratory robot. That is, by a real physical machine working with real physical biological systems in the lab.
Such machines actually exist, at least one of them does. And it has just had a breakthrough, isolating a set of genes that coded for an enzyme whose genetic source was unknown.
The Wired article has many more details, which makes it extra sad to know that Wired Online has just been cut drastically by the budget knife.
I'm reading the transcript of the idea session for Raiders of The Lost Ark. Steven Spielberg and George Lucas brief Lawrence Kasdan on the story and characters that Lucas had already worked out with Philip Kaufman.
The striking thing about reading the story is that none of the participants care in the least about anything but maximal suspense and entertainment, delivered plausibly. There is very little attention to anything other than making it a fantastic experience. And the picture of entertainment is built entirely on how you would receive the story yourself. There are very few inner goals, very little obstruction as to how it should be accomplished. It is all about making the entertainment succeed.
The problems up for discussion are simply "Can you follow it?" and "Does it get boring?".
Not long ago I talked with a couple of skilled consultants (Kim and Ebbe from here) about their earlier work in the games industry. We talked about optimization, and how they would occasionally have things that needed doing which took a few milliseconds - and still far too long. The game has to keep a high frame rate for the experience to hold, and then milliseconds are suddenly expensive.
- There is something healthy about having the experience as your only measure of success. It eliminates a lot of excuses about what you can ask of people in order to reach your own goals, or theirs. "Useful" or "important" still leaves room for plenty of compromises in the experience. When the experience is everything, there is nothing but the user's enjoyment of what you have made. That's healthy in two ways: it eliminates bad excuses, and it takes you all the way over the edge. From merely "useful" to "enjoyable" is a long and important step.
The Chrome Experiments seem like an odd departure for Google. All Google properties seem to focus on fundamentals, not experience, but here you have a group of experiments that are pretty much exclusively about experience. Also, what happened to white?
Of course the experiences highlight that fundamentals make a difference: Casey Reas' blog post seems very on the mark to me.
Chrome and its fast JavaScript capability offers a glimpse of a Web without proprietary plug-ins.
[...]
Technically, I think the greatest innovation of Chrome is launching each Window or Tab as a separate process. If you try to run Twitch on Firefox it starts to slow down as more windows open. Each mini-game competes for the same resources from the computer's processor. In Chrome, because each window runs separately, the frame rate remains high.
Don't miss out on Sascha Pohflepp & Karsten Schmidt's socialcollider (which, sadly, sucks performance-wise in Firefox)
All of this goes to show that technology is never done. There's always more to do, something different to do. Search is not done either. Where the money goes is not done.
ARToolkit has been around for ages (10 years), but until quite recently it was sadly locked up in C and C++ libraries. Recently it has been completely liberated and implemented in Flash, Processing, Java and C#, and now runs on any toaster. The power of that is just huge. The usual first instinct with ARToolkit is "Oh, look at this virtual stuff on top of my real stuff." Then you have to suppress a yawn. But now you have all of these platforms, and people with a completely new perspective start using ARToolkit in completely different ways.
The T-shirts are nice, but actually ARToolkit is not a good match, since it's not really a 2d code, but pretrained recognition (read the whole thing, Claus - they added 2d barcodes on top of the FLAR-pattern).
The calibrate and forget on a fixed surface works well.
The one I really think is brilliant is using non-recognition as the signal, with recognition as the steady state.
The ARToolkit code really benefits from coming out into the fresh air of Processing and Flash. New people with fresh energy to experiment, and different ideas about what it's for.
If you are doing print on demand, this fingerprinting technique lets you fingerprint the actual paper you're printing on. The ultimate document DRM.
Industrial production is becoming social - and the more social production becomes, the more social tools we get to produce with.
The Monome is a popular minimalist digital control box for musicians and other creative people. Or, "popular" may be saying too much, because it has only been made in a few very small runs - producing electronics in volume is expensive and risky. On the other hand, volume is the only way to make electronics cheaply, so you're caught in a "rare or risky" trap. You can either lose money, or work at much greater cost.
At least that's how it has been until fairly recently, but the arithmetic is coming apart. Monome's partial solution to the problem - that assembly can only be ordered in volume, if it is to be done in China - is to sell the Monome as kits, in loose parts. There are simply enough people who can assemble one themselves to give it some distribution.
Next problem for the prospective Monome customer: even the kits have been in short supply.
Enter the next development: the growing pool of talent for simply doing it yourself. People have started taking their Arduino boards and designing interface kits for them, so they can build their own Monomes on the Arduino platform. Here, admittedly, the difficulty threshold rises considerably - you have to draw up a printed circuit board in your design program, find someone to manufacture it, and then hope the design was free of errors, so the board doesn't burn up when you test it.
On Flickr, someone did exactly that for the Monome, uploaded a picture of the circuit board, and started a collection to get a batch produced - cheaper in quantity, as mentioned. Tom Armitage calls it fanufacture when fans of something physical cross the threshold into production.
This kind of interest is growing social tools for production. From Holland we have had Shapeways for a while, which prints things in 3D - including your newly designed Monome enclosure. Shapeways has a social structure - you can share your objects so others can have them printed too. Now the hobbyist circuit board manufacturer BatchPCB has introduced the same social sharing for circuit boards that actually work. So circuit board design has become a social skill, where before it didn't really have a social aspect - except in special cases like the Arduino Monome above.
That matters, because neither 3D modeling nor circuit board design is trivial. Just the difference between 3D you want to look at and 3D you want to hold in your hand is large (the latter is harder).
A fair amount is still missing - assembly and soldering are not trivial - but those are skills far easier to learn than the whole chain from idea to product.
I can't think of a better post to have ID 4000 than a link to this remarkable test of genetic programming. Can you paint the Mona Lisa with just 50 polygons?
(as some people in the comments point out: "It's not really the genetic algorithm, but a stochastic hill climber")
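For the curious, here is a minimal sketch in Python of what that hill-climber loop looks like - assuming a fitness function that renders the polygon list and sums the pixel difference against the target image (the rendering is left out, and all names are mine, not the linked code's):

    # Stochastic hill climber: keep one candidate, mutate it at random,
    # keep the mutation only if it gets closer to the target image.
    import copy
    import random

    def random_polygon():
        return {"rgba": [random.random() for _ in range(4)],
                "points": [(random.random(), random.random()) for _ in range(3)]}

    def mutate(polygons):
        child = copy.deepcopy(polygons)
        poly = random.choice(child)
        if random.random() < 0.5:
            poly["rgba"] = [random.random() for _ in range(4)]  # recolor
        else:
            i = random.randrange(len(poly["points"]))           # move a vertex
            poly["points"][i] = (random.random(), random.random())
        return child

    def hill_climb(fitness, n_polygons=50, steps=1000000):
        current = [random_polygon() for _ in range(n_polygons)]
        best = fitness(current)
        for _ in range(steps):
            candidate = mutate(current)
            score = fitness(candidate)
            if score < best:  # lower = closer to the Mona Lisa
                current, best = candidate, score
        return current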
Actual 4K posts won't happen in a good while, I'm at 3700 or something.
The Google News/United Airlines sell-off sounds more and more like an easily doable, virtually anonymous way to make a lot of money. If all it takes to provoke an enormous drop in a highly liquid stock is a large number of clicks on an article on a regional newspaper's website, then the information ecosystem around the stock market is just too porous and dispersed for anyone to effectively control.
One wonders if the original website clicker later bought United stock.
Esquire has published the world's first paper magazine with an active e-ink cover. The package includes a battery fitted to the magazine and a moving-picture ad for sponsor Ford. It sounds like a medium from the future.
The awesome scale of the Large Hadron Collider must be one of the best arguments for big government since the manned mission to the moon.
For Elvis Costello fans it is of course more of an acronym collider: Costello, an admittedly inadequate guitarist, has often referred to himself as LHC - Little Hands of Concrete.
Things I learned from Puredata I:
Puredata is a dataflow language. You connect inputs and outputs of boxes that process the flowing data and output new flowing data. As data flows, it triggers more flow downstream.
Now, a lot of the time, you want data in some place - but you don't want to cascade more actions*. So Puredata has a common abstraction distinguishing between hot and cold input gates. Hot gates trigger an output pulse. Cold gates just deliver data, but don't trigger more activity.
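To make that concrete, here is a back-of-the-envelope Python rendering of the idea - hypothetical names, nothing to do with actual Pd internals:

    # An add-box with a hot and a cold gate: the cold gate just stores its
    # input, the hot gate computes and pushes the result downstream.
    class AddBox:
        def __init__(self):
            self.stored = 0       # value delivered on the cold gate
            self.outputs = []     # downstream callbacks

        def cold(self, value):
            self.stored = value   # no output pulse, just data

        def hot(self, value):
            for out in self.outputs:
                out(value + self.stored)  # only the hot gate cascades

    box = AddBox()
    box.outputs.append(print)
    box.cold(2)   # silence: the cold gate just delivers the 2
    box.hot(3)    # prints 5: the hot gate triggers the output pulse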
I want this abstraction built into all my communication devices and all my social networks. There are tons of inputs I want to follow, tons of people whose input I care about - but there's a distinction between caring about inputs, as they apply to stuff I am already paying attention to, and caring enough to want to be actively interrupted whenever there is any news at all.
Right now hot or cold is a feature of a particular medium, more than a specific aspect of a particular communication between me and someone or something in a medium.
*It's an energy thing - you can't have loops at all if every pulse must generate more pulses. So a language that always pulses can't be a very sophisticated one.
When the iPhone came out, someone said something to the effect that "Oh my god, this really is the shiny, visual, enigmatic do-all device they have in science fiction movies". It's impossible not to feel the same sensation when you look at the All Things D footage of Dean Kamen's armless test subject and his new Terminator arm.
The arm scares the shit out of me, but in the best possible way.
If you want a fluff-free Reboot, try Economics, open hardware, testing, Dorkbot, Open Social and Arduino.
A tech-free Reboot, conversely, steers clear of these.
Reprap shoes. That must be the end of any purchasing doubt.
MMS - Chord Store from chris durftal on Vimeo.
Cylob makes electronic music gadgets.
Bonus: a yellow drum robot - not made by Cylob, but by Frits Lyneborg, who apparently roams around right here in Copenhagen? Are you out there, Frits? If you come to Reboot, then for heaven's sake bring some robots!
UPDATE: I thought the drum robot was the best bit, but that was before I found this video, which is utterly brilliant, not least because the choice of materials - and the radio in the background - is so quintessentially Danish. And the mistakes along the way, and the captions, are fantastic.
The Copenhagen Institute of Interaction Design is both a cool place about cool stuff - and their website has a really nice background flash-toy to boot. If only I was a student in need of a place to study/research.
Plazes has been sold to Nokia. The competition to be the one the others get your geolocation from is heating up. Plazes has some desktop automation that gives it an edge, but also a load of features pulling in the opposite direction, if open location sharing with OAuth is the future.
Very nice idea, a WiFi telescope, spatially resolving available wifi, so you can see where on the horizon the Wifi is, and whether it is open or not.
It is high time I blogged the relatively recent news that the reprap project, which makes cheap DIY fabbing machines, has reached the point where the machine can produce all of its own mechanical parts, which the machine's creators interpret, with some justification, as the machine now being able to build copies of itself. They still can't produce the electronics the same way, but nonetheless something incredible has clearly happened here.
You're going to have to do a Harddisken segment on this, Føhns, if one hasn't already aired.
Incidentally, reprap also uses the Arduino, the open-sourced, hackable electronics platform that has become the Linux of hardware hacking, or something like that. The word "also" covers the fact that I blogged about another Arduino project a few days ago.
The ability to reproduce itself is to artificial life what the Turing test is to artificial intelligence.
It's one thing to understand that it's hard, but for the technologically aware it can be hard to grasp that it was ever considered mysterious whether this was something we could do at all. Yet you don't have to think much further back than the 1950s before the question was seriously weighed as a matter of doubt. Before the digital computer, before the decoding of DNA, we did not have a single concretely described example of self-reproduction. So great minds, such as the universal genius John von Neumann, pondered how one would build a machine that could build itself. After working on a mechanical description of such a machine, von Neumann was persuaded by his colleague Stan Ulam to try formulating the self-reproducing machine as a mathematical abstraction instead. As the theoretical substrate the machine could grow in, von Neumann - with Ulam - developed the idea of cellular automata, the kind of simplified computing systems Stephen Wolfram has since tried to pretend he invented. Von Neumann succeeded in describing a machine that he proved could reproduce itself - and that entirely without an actual machine to run the program on. As far as Wikipedia knows, nobody has yet carried out a concrete replication on a computer running von Neumann's program.
The difficulty the problem posed for von Neumann illustrates how fundamentally different the world with computers is from the old one without. The option of simply doing, instead of reasoning about what could be done, is a decisive, simplifying change. My mathematics degree was virtually software-free. Mathematics isn't like that anymore, and frankly that's very good for mathematics. Not that there isn't still great value in the old way of working - von Neumann's machine remains unrun - but the iterative "trying costs nothing" culture that comes with the computer is healthy.
Polarrose has a new look and a face search engine. Here, for instance, is Abraham Lincoln in many versions. The number above each face covers the many places the same picture was found, so a nice feature is that you see, as simply as possible, different pictures of a person.
The Fotowoosh demo was promising when it came out but the Facebook gizmo doesn't quite work - not for me anyway.
Google has a new (?) graph-generating API that provides GETable PNG graphs. API documented here. Note the simple ASCII-to-data conversion, which is destined for "what word is this" fun in future blog posts.
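As I read the docs, the conversion goes roughly like this - values scaled into 0-61 and mapped onto A-Z, a-z, 0-9, so a whole data series travels inside the URL (a sketch; double-check the parameter names against the documentation):

    # Google Chart API "simple encoding": one character per data point.
    SIMPLE = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789"

    def encode_simple(values, max_value):
        # scale each value into 0..61 and pick the matching character
        return "".join(SIMPLE[round(v * 61 / max_value)] for v in values)

    data = encode_simple([0, 10, 40, 35, 61], 61)  # -> "AKoj9"
    url = ("http://chart.apis.google.com/chart"
           "?cht=lc&chs=250x100&chd=s:" + data)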
It's clear now that the SCO lawsuits were really a poker game: SCO v. The Rest of The World, No Limit Texas Hold'Em.
SCO picked up a weak hand, but since SCO was short-stacked already, attack was seen as the only option, and they made a heavy bet. However, the flop produced nothing for SCO, and none of the other players were much impressed with SCO's bet, so they stayed in the game. SCO then moved all in, hoping to scare off some of the other short-stacked players, like Novell, and then luck it out in court with IBM. No such luck - SCO simply didn't show enough power (constantly checking your cards and asking the dealer pointless questions about the pot is not how you show strength, but how you fake weakness, to keep the other players in). Everybody stayed, and frankly, given the size of the pot involved, the showdown was not so much a showdown as a letdown. SCO just had nothing.
Sun is changing its NASDAQ trading symbol from SUNW to JAVA? To emphasize Java? At a time when all the cool stuff (1, 2) coming from Sun has nothing to do with Java? Is this a really strange joke?
Also, note how all the cool stuff has an "Open" prefix. And note how Java does not.
I have to link to this too-clever "find the vein" patch, which uses infrared light to locate blood vessels hidden just beneath the surface. I haven't seen it among the addicts down on the street yet, though.
From the description on TechCrunch, Foxmarks sounds like refreshing news - but turning bookmarking into search results changes the incentives for bookmarking, so I don't see how any kind of Foxmarks success avoids turning into the same spam problem Google is already dealing with.
Recently a structured version of Wikipedia was released, with the intent of helping you search for "living Russian pianists" and other structured queries that are hard to pull off today. There is a whole stack of technology (RDF, SPARQL, etc.) to dig into if you want to play with it.
To get off to a good start I am running, in a joint venture with Morten Frederiksen, a hack night where we will play hands-on with DBpedia, as the structured Wikipedia edition is called.
You are invited.
Sign up over on the hack blog. The semantic web, capital letters or not, is both hot air and hot shit and - capitals or not - top priority in the next wave, and this is a fine occasion to climb aboard.
A good intro to DBpedia can be found here.
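To give a flavor, here is roughly what the "Russian pianists" class of query looks like against the public endpoint, sketched in Python with the SPARQLWrapper package - note that the category predicate has changed name across DBpedia releases (skos:subject early on, dcterms:subject later), so treat this as illustrative:

    # Ask DBpedia for members of a Wikipedia category via SPARQL.
    from SPARQLWrapper import SPARQLWrapper, JSON

    sparql = SPARQLWrapper("http://dbpedia.org/sparql")
    sparql.setQuery("""
        PREFIX dcterms: <http://purl.org/dc/terms/>
        SELECT ?pianist WHERE {
          ?pianist dcterms:subject
            <http://dbpedia.org/resource/Category:Russian_pianists> .
        } LIMIT 20
    """)
    sparql.setReturnFormat(JSON)
    for row in sparql.query().convert()["results"]["bindings"]:
        print(row["pianist"]["value"])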
The NY Times reworked into nothing but printer-friendly, permalinkable URLs. Same thing for the BBC too.
Bonus (possibly primary) feature: Usable from cell phones.
It's good to be able to say "I'm just experimenting with the design of networked objects" when you need to rationalize the purchase of toys. I just had my first tiny success building data-driven Mindstorms robots. If you have anything, anything at all, you want or need to do, I suggest you don't waste your time watching the video above.
(Note: the original screenplay for this movie called for a co-star, appearing from across the globe via Skype, to actually control the robot, which of course would have added to the excitement. That'll just have to be next time. Possibly with a sound-sensitive robot aware of the controller even as it is being controlled. Possibly with an "I'm sorry Dave, I can't do that" sound embedded as an error message.)
Here's a feature I would totally dig in my newsreader: "Subscribe to this feed for a week".
The use case is this: You see some interesting post somewhere, you probably surfed your way there through 3-4 references. The post is interesting so by association the blog it's on/feed it's in is also mildly interesting. But there's a catch: If you just subscribe to the feed then 9 times out of 10 you're going to have to waste time later pruning it out of your feeds because your interests are only tangential to the feed - you just happened to meet that tangent on that particular day some time ago.
In related missing features: why doesn't del.icio.us have "subscribe to the feed advertised on the page this URL points to" - which is sort of the same thing.
This would require some feed metadata contract, but I could see that spreading like wildfire - and most blogs specify some kind of feed on all post pages anyway.
OK, so here are some things I think will be really important for the next 10 years in IT - well, for me anyway...
Day one of Where 2.0 was filled with social data aggregation, bottom up open geo-standards and platform owners. A couple of very nice Google demos of new deep integration between SketchUp, Google Earth and Google Maps:
KML annotation now works in Google Maps
Imagery from Google Earth can be put into SketchUp as textures
SketchUp models can be seamlessly added back into Google Earth
Everybody likes criticizing Google's perceived lack of focus and integration - but it was really hard to see any lack of integration today.
There was also a Microsoft demo of Microsoft's answer on geodata. And yes, in both cases the question of the interface between the platform owner and the data creators is a big deal.
Underneath the giants we find the platform extenders: Platial and Wayfaring and all that. Some of these do personal "non serious" social stuff. Ownership is still important here, but the importance is more personal and political.
And then you have the "infrastructure extenders" like Zopto. Aggregating data on their own - but relying on the platforms for usefulness...
At the other end of the spectrum we also heard from OpenStreetMap. I'm as impressed with the project now as I was at Reboot.
Great fun and good hacking also in Socialight (very briefly), Gsmloc and most definitely the Gumspots project.
More Imity related notes on the Imity blog.
I remember a nice presentation years ago by someone from IBM in which he compared pervasive computing to the spread of clocks: the clock tower was equivalent to the mainframe, the grandfather clock to the minicomputer, the clock-radio to the PC, and the embedded clocks you see everywhere to the future of computing.
From an O'Reilly Radar piece on a custom hardware interface for traffic reports.
Nice - JabPhone is basically SkypeOut for your Jabber client. You can do voice calls to all phones and text messages to cell phones worldwide. Only Google Talk is supported, since Google Talk is the only Jabber client with Jingle Audio support so far - but that's just a temporary condition. Long live open standards.
Note: Rates (+50%-100%) and commercial packaging (this is a 0.02 release of service and web site) are not at Skype level yet - but that can all change...
Interesting conversation with John Kao about innovation. He repeats one of the classics: Ideas are cheap. It's what happens after the idea that's valuable.
It's odd reading if you come from a "small teams/agile teams"-type software culture, since so many of the ideas described are second nature in that kind of environment (I'll happily grant that this is a relatively new thing - and the interview goes back 6 years)
Here are some of the things Kao almost says that apply in software:
CNET's photo history of the hard disk may be viewed as a pictorial companion to The Innovator's Dilemma, which uses the history of the hard disk industry as the core case study for how innovation works.
Here are two simple economic arguments for why all the innovation in hot new web technologies comes from user/consumer oriented web applications.
It might be obvious, but I hadn't seen it written down as simply as that anywhere.
Vernor Vinge got it right sooner than he imagined. Here in early 2006 the rate at which Apple introduces new products has officially become too fast to comprehend.
Douglas Rushkoff's "Get Back in the Box" says right in the introduction that you should forget about all the megatrends and all the good-sounding storytelling/business crap around your product and just make a good product. I like it already. Sits well with Cluetrain, with branding is dead, with marketing is dead, with "DRM is not going to bring you more customers", and with everything I believe in to boot. In other "Just make something people actually want, goddammit" news, here's Guy Kawasaki's also likeable idea about what you should aim for in your product.
I'm sorry, but it is completely impossible for me to stomach the combination of pure market-driven opportunism and communist totalitarianism in the press release from Verso Technologies on how they are helping the Chinese government curb free speech by filtering Skype in China.
"The trial is representative of the significant opportunities for Verso's products in the Chinese market, where VoIP is highly regulated and the use of Skype software has been deemed illegal," said Yves Desmet, Verso's senior vice president of worldwide sales, in a statement.Oh yeah, more than 1 billion non-free Chinese to censor. A huge opportunity.
Scumbags.
So Google's rebranding of Urchin as Google Analytics and the lovely new price tag of 0 is definitely going to be construed as a watershed moment of sorts, when Google decided to take out an entire sector. Expect an acceleration of the already happening "Google is the new Microsoft" meme - including all the antitrust lingo that goes along with it.
Sony BMG's response to the music CD malware scandal is unbelievable:
Most people, I think, do not even know what a Rootkit is, so why should they care about it?
In the words of Mark Russinovich:
Instead of admitting fault for installing a rootkit and installing it without proper disclosure, both Sony and First 4 Internet claim innocence. By not coming clean they are making clear to any potential customers that they are a not only technically incompetent, but also dishonest.
If I were an SBC shareholder I would be very worried, since Edward Whitacre, the CEO of SBC, quite simply does not understand the internet. As mentioned on Nikolaj Nyholm's blog (and elsewhere; one place I saw it was eWeek), SBC's CEO seems to think he's in a position to charge internet information service providers like Google or Yahoo for access to SBC's DSL customers.
There are two things wrong with that. First of all, I can't imagine the DSL customers wouldn't complain or even sue over not getting what they're paying for. While SBC may have a stranglehold on their customers right now, by owning the copper to their homes, technology just marches on too fast for any purely tech-driven monopoly to last. Your customers have to actually like you too, or they'll switch to cable or WiMax or The Next Great Thing.
Second, Whitacre seems bizarrely unaware that the model he proposes has been tried - and failed - before. Many companies tried - notably AOL. I learned how similar the AOL model was to what Whitacre proposes, and why it failed, by watching this excerpt of the Brewster Kahle interview from the PBS Nerd TV series. I learned the rest of what I needed to put together my post title from this Nerd TV excerpt with Tim O'Reilly.
Niall Kennedy took a lot of photos at the Microsoft Live Platform technology preview. My favourite is the one I've blatantly stolen (and doctored) above (original here).
Why do I like the image? It gives you a good idea of how important you can hope to become as an alternate device/software/service vendor.
One thing that's unclear to me: Is Microsoft Live the cunning application-level master plan that is supposed to be leveraged by Microsoft's bold reinvention of every web standard in Longhorn, or was the powerpoint show at the unveiling in fact the stack of slides some poor bizdev Microsoftee could come up with when she got the order to scramble so that Steve Ballmer could "fucking Bury Eric Schmidt" and "Fucking kill Google"?
(Niall: Leave a comment to complain about my use of your image, if it's a problem)
When I first read about Amazon's Mechanical Turk, I hadn't realized that this wasn't just human data mining for Amazon properties but in fact an open system (according to TechCrunch). That's great! I remember reading (but not where, so the story may be apocryphal) about Chinese typing factories that did manual OCR for western companies. The way they dealt with errors was by simply doing all the typing in triplicate and voting - which reduced the error rate enormously.
Now everyone can play - Amazon has effectively set up an open market for routine office/terminal work. Whether this would make sense for anyone not in Eastern Europe or Asia I don't know. It might make sense for teenagers or kids as a game-like activity that is also economically rewarding.
Actually, I remember having a problem with data-washing something like a quarter million phone numbers. We had to come up with a cheap way to reformat all the strange ways people write their phone numbers into ISO format. We used to dream of making an online service where we would get people to do the work as a recreational activity - but this is much better. If I had that problem again, I would experiment with sending it to the Turk.
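For flavor, a minimal Python sketch of the washing step, assuming Danish numbers and aiming for "+45XXXXXXXX" output - real-world input is messier, which is exactly why triplicate human typing plus voting is attractive:

    import re

    def normalize_dk(raw, country_code="45"):
        digits = re.sub(r"\D", "", raw)        # strip spaces, dashes, parens
        if digits.startswith("00" + country_code):
            digits = digits[2:]                # "0045..." -> "45..."
        elif not digits.startswith(country_code):
            digits = country_code + digits     # assume a domestic number
        return "+" + digits

    # normalize_dk("(+45) 12 34 56 78") -> "+4512345678"
    # normalize_dk("12-34-56-78")       -> "+4512345678"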
Russell Beattie is amusingly underwhelmed by the latest slew of Web 2.0 upstarts - bothered by the enormous amount of copycats and possibly by the almost dronelike tone with which all these upstarts repeat the same mantra: social, rich media, events, blogs, feeds.
I think this particular kind of cranky is always going to miss out on a lot of good stuff, but clearly there are many crappy ideas being thrown about just now.
While acid tongued new economy debunking is always good fun, in the end, the only memorable thing in the post is the mobility rant:
And finally, where is the goddamn mobility? This is what most depresses me most about these new sites. Not being able to use my mobile to sign up and use any new site or service that’s launched now is completely inexcusable. I don’t care what you’re doing, you’re wasting your time and the 20 seconds I spent even checking out your site. The future is so obviously in mobiles, why the hell are so many startups still screwing around on the desktop? Morons.
Even this I'm not so sure is right. Plenty of life left in the plain old web without mobile access.
I think there's a decent chance that China will get to Mars first. They're huge. They're growing. They're hungry. They don't mind risks.
Going to Mars is going to cost bundles of money. It's also going to require tons of technology, but technological leapfrogging is already happening in China, and once you have a 200 million-strong middle class providing hundreds of thousands of engineers every year, you're definitely going places. Going to Mars is still largely a technological pipe dream. The project that will put together the space vehicle that goes to Mars first has still not been started. Leapfrogging means that as long as the other guys haven't actually started to build the damn thing, you still have an excellent shot at catching up.
Since SAS is still close to being the only airline offering wireless over the Atlantic, does this then mean that techies travelling to Europe will use Copenhagen as a hub to get the wireless?
Considering how many times I have had agonising waits at Heathrow on my way to or from North America (>30% luggage screw-ups during transfer, including one layover due to gale-force winds), I can certainly appreciate the irony.
Too many of us seem to believe that specializing in something simply means you don't know about other things. I could, for example, call my mother a Windows specialist, because she has never used Linux or OS X. Or, I could say that my relatives out in the countryside in Arkansas are country music specialists, because they've never heard anything else.
From the chapter Be a Specialist in this book. (And for the hardcore geeks it's important to know that there's another chapter titled Be a Generalist).
It annoys the heck out of me that "works when you're offline" for most companies, including Google it seems, means "Without a connection to the internet". There's another kind of offline that is adversely affected by the first kind of offline, namely offline in the sense that I'm currently not at my own terminal. That means I'm offline in the sense that I have no context instead of having no conduit.
As data moves to the edges this does not get easier. The mass storage of GMail and similar modern webmail could be easily augmented to deal with this kind of offline as well, but that's not a focal point for any of the companies I can think of.
I think it's strange that it isn't for Google. Here's the one thing they have that Microsoft does not have more of: servers. Microsoft is tied to devices I own. Google isn't - except that they are tying themselves to these devices with e.g. the new Google Desktop.
So please Web 3.0 and 4.0 companies repeat after me:
I am not my terminal. I am not my terminal. I am not my terminal. I am not my terminal...
Maciej Ceglowski, he of the audioblogging manifesto and the hackers-are-painters debunking, has done it again - this time he tears apart the space shuttle program and the strange circular logic that keeps it running.
I am unsure whether any space program could tolerate the scrutiny Ceglowski applies to the shuttle. Clearly there is a huge public investment in any space program that is never recouped, even if commercial satellite launches are viable later on.
Second: if it weren't for circular logic, a lot of stuff would never have been built in the first place. We call it bootstrapping for a reason. The world needs Münchhausens.
The race for petaflop supercomputing (equivalent to something like 200,000 PCs) is on. The IBM Blue Gene project has been going on for some time and toppled Japan's Earth Simulator half a year to a year ago - but now Japan is set to build a 10 petaflop machine, a remarkable 73 times faster than the currently fastest version of the Blue Gene machine.
It's supposed to go live as early as 2010, which means that the top speed of computing clusters will more than double every year for the next five years. Obviously, since we're talking clusters, money buys almost any annual speed increase you like. But the management hell of running the clusters will kill you in the end if you don't innovate, in the form of permanent, never-ending hardware replacement and impossible energy management.
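(Arithmetic check on "more than double": reaching 73 times current top speed in the five years to 2010 takes an annual factor of 73^(1/5) ≈ 2.4.)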
"Why don't you put on this biometric gear and lut us take care of you, so you'll have the experts on your side"
The permanently on, wearable computer of Steve Mann integrated with big brothery web based monitoring to create the real nightmare of tomorrow: Big Mother - the dictator we asked for.
My rule of thumb is that it should take no more than 3 months to go from conception to launch of a new web service. And that's being generous. I'm speaking from experience here. I developed the first version of ONElist over a period of 3 months, and that was while working a full-time job. I developed the first version of Bloglines in 3 months. By myself. It can be done. And I suck at it! Just ask all the engineers who have had to deal with my code.
Some more of the bootstrapping gospel.
So I was missing something new and interesting at Reboot, but later I was talking to Dalager and he reminded me of ActiveGrid (that I didn't remember knowing about at the time) which would have fit right into the conference, possibly along with this Adam Bosworth talk.
Basically ActiveGrid + what Bosworth is talking about is the "Small is the new big" of data. It's about not going big, but just using the network to build big applications fast on a free platform and lots of cheap hardware. The free availability of easy to use quality tools (To Be Evaluated) for this kind of thing is to my mind what makes for exciting times.
That chip you're about to have implanted with an interface directly to the visual cortex now has a power source.
Google's web accelerator is easily the most controversial "enhancement" to the current technological infrastructure of the web since VeriSign's abominable Sitefinder. Web apps are breaking left, right and center due to overzealous prefetching. Reports aren't in yet on the extra bandwidth cost the same prefetching incurs for content providers. And furthermore the application itself seems to be a mess, sending different people the same cached, session-dependent pages from web applications.
[UPDATE: This problem may only be a problem by appearance, as reported here]
As is written down all over the place, the broken webapps are probably not coded in true RESTian fashion - a prefetcher assumes that GET is safe and side-effect free, as the HTTP spec says it should be, so apps that put destructive actions behind plain links get those actions triggered - but breaking them still seems a violation of the 'liberal in what you'll accept, strict in what you send' philosophy that keeps the entire internet running.
This wouldn't be so bad if it were just happening to a few people, but this is backed by Google. It's as if the biggest automaker in the world promised all customers a free exchange of their current car for a Hummer or some similar "I Own The Road" gas-guzzler. A nightmare.
Alternate fave analogy: It's like Mr Burns planning to block out the sun so his nuclear plant will make more money.
It took 36 hours, but my Google tag search seeding worked. I can now search for johnny within tag 'googlehacks'.
Time to get busy with the automation of the RSS seeding.
Who knew? You can specify that a URI is to be treated as data, not a reference. It doesn't work in IE, but for the rest of you there might be a miniature graph next to this colon:
Details on how it works here.
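In Python terms, what is happening is roughly this - the data: scheme from RFC 2397; the file name is made up:

    # Build a data: URI so the image travels inside the page itself,
    # instead of being fetched by reference.
    import base64

    png_bytes = open("graph.png", "rb").read()
    uri = "data:image/png;base64," + base64.b64encode(png_bytes).decode("ascii")
    # Drop uri into an img src attribute and no separate request is made.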
(Via Sam Ruby)
[UPDATE: del.icio.us tag search field test underway]
By exploiting the same mechanism that google bombs rely on, you can generate your own page sets and search only those pages with Google. A minor example.
Since text in links leading to a page counts as text on that page, you can simply doctor link text to generate your search subset by including a special token like 3f4h9hgfh98fh9p348gh .
Now searches for e.g. hosting can be bound to the pages I link to by requiring my link identifier token in searches. By augmenting the scheme a little (e.g. by agreeing on a naming convention along the lines of gtag_tagging for the google tag 'tagging') we could have Google search of our tag database quite easily. Obviously this is just a poor man's replacement for being able to limit search by forward links.
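A hypothetical helper to make the convention concrete (the function name and token format are mine, not an actual standard):

    # Decorate a link's anchor text with gtag_<tag> tokens, so Google
    # indexes the tokens as text on the target page.
    def gtag_link(url, text, tags):
        tokens = " ".join("gtag_" + tag for tag in tags)
        return '<a href="%s">%s %s</a>' % (url, text, tokens)

    # gtag_link("http://example.com/", "my host", ["hosting"]) returns
    # '<a href="http://example.com/">my host gtag_hosting</a>', and a
    # Google search for: hosting gtag_hosting stays inside the tagged set.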
Desktop search engines are nice, and tagging is all the rage, but tagging still isn't all that, because del.icio.us isn't also a search engine.
Obviously what we would really like is for a major search engine to just embrace tags. This of course would lead to immediate scandal, because everybody would be screaming "Autolink Scandal!" 2 minutes after launch. After all, it's not really that nice for a content provider that I get to augment his content, and tagging does exactly that. (Incidentally, have the political defamation tags - a la Google Bomb - begun to appear?)
If Google tagging were to become widespread, Google should embrace it and simply start supporting real tags through e.g. rel attributes (like Technorati), and Tim Berners-Lee should declare victory for the semantic web. There is no important distinction between RDF and tagging at present. Later it might be nice to turn tags into full-blown RDF assertions to qualify them. Berners-Lee might not see this as semantic victory though, because the ideas on semantic assertions based on RDF metadata will not come to pass because of tagging.
On closer thinking: Actually Bruce Sterling is extremely old fashioned. In the quote below he pretends that the device he uses to connect to the network somehow matters. It doesn't. And why is he using a device that ties him to something like a desk?
To borrow a phrase from Cory Doctorow, Bruce Sterling may be an edge case, but he's a leading edge case:
So, where're all my records and CDs? They're inside the laptop. DVD player? Laptop. Newspapers? I read Google News in the morning. Where're my magazines? I read Metropolis Online, I write stories for SciFi.com. Where's my TV? I got no TV: Compared to Web surfing on broadband wireless, watching a TV show is like watching ice melt. [...] Where's my fax machine? Laptop. Mailbox? Laptop. Filing cabinet? Laptop. Working desk? Laptop. Bank? Laptop. Place of business? Laptop. Most people I deal with have no idea I'm here in California. They'd never think to ask me. Why should they? They send e-mail, they get what they want, game over.
isen.blog's F2C: Freedom to Connect conference will be properly wired:
This link will provide (a) streaming audio, (b) high-speed interactivity for everybody in the room, (c) a back channel so in-room participants and remote participants will be able to contribute to a group chat (thanks Greg Elin and Manuel Kiessling).
Could there be an Organic business opportunity here Nikolaj?
Adobe is buying Macromedia. Adobe no longer has any incentive to promote SVG, and since Microsoft jumped ship a long time ago on SVG (in favour of proprietary technologies spearheaded by Longhorn), SVG is probably over and/or will be merged into Flash somehow.
Actually it was Just who thought of it, but the appearance of Google Video Upload combined with the "Infinity + 1" program leads one to believe that Google actually have invented infinite storage...
The very old assertion that collaborative spam filtering is not really what you want - because some people oddly enough like some kinds of spam - rings particularly true here at classy.dk headquarters at the moment. GMail spam filtering is beginning to systematically allow various kinds of bogus stock tips through as ham instead of spam. I can't think of any good reason for this other than the possibility that many people actually grade these messages as ham. They are as repetitive as other kinds of spam and don't appear to be particularly well crafted to beat filters. And furthermore, stock tips are the only spam messages I am still getting. No Xanax or Vicodin offers make it through.
"Note that these charts are actually Flash, so you can right-click and zoom in on the details of the graph". How nice. And Flash is vector graphics, so the graphs is sharp and crisp at all zoom levels.
Sidenote: Is Sifry subliminally perpetuating the "liberal mainstream media" myth, by listing mainstream media as blue and "citizens media" as red in this graph?
Amazon has an interesting new feature based on the full text data they have because of the Search Inside feature. They show you the phrases from a particular book that are statistically improbable, i.e. standout phrases, phrases unique to that particular book. This is very useful; I'm not sure it's surprisingly useful, but it's certainly useful.
I am reminded of an IBM research paper on hierarchical bayesian categorization which used similar ideas to obtain useful hierarchical categories of documents. Since I read that paper I've been wondering when we would see this applied in the real world, but no search engine seems to have emerged from the IBM project.
Oddly related projects: Technorati "related" tags and, by extension, applications of Yahoo's term extraction service - this is like open-sourcing the context algorithms underlying e.g. AdSense.
Google has begun actually parsing the language on web pages to be able to answer simple factual questions by quoting webpages. As an example here's the population of Denmark. I think Peter Norvig gave a talk on this at some recent O'Reilly conference where he talked about using Google's vast amounts of data to beat traditional AI approaches to language parsing.
If my guess on how this works is correct, then Google's approach rhymes perfectly with some of my own, sadly unrealized, ideas on how to build parsers, and also with Jeff Hawkins' ideas on brain function as described in On Intelligence. What's interesting is to use our sensory experience to build good prediction models for further sensory experience. In Google's case the sensory data consists of text utterances on millions of web pages. Google only lists one answer to a question, but I would be very surprised if these weren't in fact just the most likely answers based on statistics derived from Google's database of text data.
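My guess, condensed into a naive Python sketch - corpus here stands in for Google's crawl, and nothing in it is a real API:

    # Count completions of "<subject> is ..." across a corpus and let
    # frequency pick the statistically favored answers.
    import re
    from collections import Counter

    def most_likely_answers(corpus, subject, relation="is"):
        pattern = re.compile(r"\b" + re.escape(subject) + r"\s+" + relation +
                             r"\s+([^.,;]{1,40})", re.IGNORECASE)
        counts = Counter(match.group(1).strip().lower()
                         for text in corpus
                         for match in pattern.finditer(text))
        return counts.most_common(3)

    # most_likely_answers(crawl, "the population of Denmark")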
In short, I'd be surprised if this was just Google's own implementation of Googlism.
Some of the less pretty results - e.g. the answer to the question Who is Jane Fonda - also indicate that Google is actually storing these autogenerated assertions as metadata (i.e. maintaining an "Is" property for the string "Jane Fonda"). My guess is that such a database of actual utterances is your best shot at any good model of language and reasoning. Obviously you want to add structure to your model - but this too has to be based on the statistics of actual utterances.
If I were a stockholder in Cycorp - a company busy building a basic database of this kind of knowledge manually - I would be trying to get out of that investment. The odds of succeeding by parsing actual utterances in the metadata format natural to us (language) should be much higher than the odds of succeeding in doing this manually.
The latest Google-bashing meme revolves around the idea that Google is losing. It's over. Yahoo is here. A9 is here. (Funnily, nobody seems to mention "MSN is here" in that sentence)
I doubt very much that it's true, except in the sense "Search will not be the monoculture that the world of operating systems has become". There are two things in this I'm curious about. First: We all agree this is a Good Thing, right? No monopolistic lock-in. Lots of well-behaved competitors, because the audience is fickle. Intense R&D to survive. Second: If this is good - why are people giving the message "The world of search regains balance" the spin "Google is over"? Is it just the standard modern-day negative "critical thinking", or are people so used to successful monoculture that they actually prefer these monocultures to living, thriving, competitive environments?
The world's first barcode-scanned supermarket purchase was a 10-pack of Juicy Fruit gum, sold in the Marsh Supermarket in Troy, Ohio on June 26, 1974. Interestingly, it took 25 years from the invention of the barcode to this first application in retail.
So, Danish libraries are purchasing robots to handle book returns. These robots only do coarse-grained handling, since bookshelves are a little too delicate for robot fingers at the moment. However, work is under way in labs at Johns Hopkins and University Jaume I in Spain. At Sonoma State University they solved the problem by making the library storage room more robot friendly (vendor, Wired story)
Library Journal has a story of just one library that's shipping 50000 volumes per day to patrons, and obviously needs Amazon style automation to do it. The same automation is available in a home edition on the Mac.
Jon Udell is once again out front promoting lightweight integration and unintended consequences as he demonstrates how to annotate Google Maps with personal bookmarks. This is mindblowing in case you didn't notice. When Google gets done with Google Maps (i.e. includes the rest of the world), and when these hacking toolkits get properly polished, we will have an always-on worldwide resource for geolocating web resources in high-quality zoomable, scrollable maps.
It's not Udell's own hack - but his promotion of this hack is of course invaluable. He makes two good comments:
On GPS in everything: "I never quite understood what we were supposed to do with these. But now it's clear in the future millions of us will be running around annotating the world around us."
On the fact that the hack works at all: "Was this all just an unintended consequence, or does Google really get that the future is services co-created by the people who use them? The latter, I hope, but we'll soon see."
I recently bought Paul Graham's Hackers and Painters. I've read the essays on his website, but I figured the paper version might have updates, and no matter what, books are nice.
At least one thing is new in the book. In chapter 5 - the paper version of this essay about web applications beating out rich applications in the long term - Graham talks about Windows in general and Bill Gates in particular (the title is obviously a play on Gates' book "The Road Ahead").
The front matter of the book notes laconically that, after reviewing the essay, Microsoft's PR firm did not feel like providing a press photo of Gates. Instead Graham chose this mugshot of Gates, taken in his youth after he was arrested for some traffic violation.
Yup, it's the new frontier: searching rich media transcripts with Google. The content, sadly, is mainly television and not e.g. scripts of movies. I was gunning for a "The Usual Suspects" reference with my search, since the line "Elvis has left the building" is used in that film (by Stephen Baldwin). Right now it seems the results are mainly useful for Johnny Carson obituaries.
Actually I don't think Google got here first - Yahoo got it in December. And Yahoo has cool RSS media feed content submission.
I think I have bought my last HP printer (a pity, they're better). HP is region-coding printer ink. How ironic. HP is a globalized company, presumably making more money from sourcing their purchases wherever they're cheapest, but you, their customer, are not allowed to source your HP products where they are cheapest. Now we're just waiting for the first DMCA lawsuits against people breaking the region-coding system.
Like the boingboing chant goes: May 1,000,000 hardware hackers descend on your tent. (more boing boing coverage)
(Funny side note on the WSJ article's mention of DVD region coding: It says that "many manufacturers" make region-ignorant DVD players. That's the understatement of the year. You might be able to buy an overpriced region-blocked Sony (they're also a media company after all), but you'd be hard pressed to find a cheap DVD player that isn't region-ignorant)
The blogging world is verily humming with the sudden excitement over the growing URL tagging phenomenon. By that I mean both the growing use of tags via del.icio.us and tags on Flickr, but also the clever connection to the lowercase semantic web, in this case the growing use of the "rel attribute" microformat. For evidence, just Google "rel attribute" (OK, I threw in Technorati to get good hits - because they're among the microformat champions). ((Obviously I would have liked to search among tags for the words "rel attribute" and then have given you a link to that instead, but Google is still the best search)).
It seems to me that the techno-sapient blog audience is particularly fast at picking up technologies like this. The defining quality enabling this is that no service provider owns the content on blogs. So none of the providers have the ability to lock in any kind of useful linking technology; they have no option but to cooperate - or they will soon become irrelevant as the million-strong army of implementers (i.e. the bloggers) choose not to use their standard. It helps that there is friendly cooperation among the service providers (everybody has their own niche, so they can afford to collaborate). Conversely, news travels really fast on blogs, so implementation news and new possibilities spread very fast. And once again lock-in is not really an option for tool vendors, so they need to keep up, which keeps the pace up.
This isn't lowercase. It's huge.
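The beauty of the microformat is that it's nothing but an attribute on an ordinary link, so harvesting tags takes a dozen lines. A sketch, with a hypothetical snippet of blog markup:

    from html.parser import HTMLParser

    class TagLinkParser(HTMLParser):
        """Collects the hrefs of links marked up with the rel="tag" microformat."""
        def __init__(self):
            super().__init__()
            self.tags = []

        def handle_starttag(self, tag, attrs):
            a = dict(attrs)
            if tag == "a" and "tag" in (a.get("rel") or "").split():
                self.tags.append(a.get("href"))

    markup = '<p>Filed under: <a rel="tag" href="http://technorati.com/tag/folksonomy">folksonomy</a></p>'
    parser = TagLinkParser()
    parser.feed(markup)
    print(parser.tags)   # ['http://technorati.com/tag/folksonomy']

No registry, no schema, no service provider in the loop - which is exactly why nobody can lock it in.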
As a side note to the previous post, dig the Google search for Goffman: Google Print is in beta. Color me clueless, but I hadn't noticed that before in my searches (and I obviously read the announcement). It works exactly like Amazon's 'search inside' feature, but Google Print has the great advantage that it does not require a clumsy credit card approval to work (something I could never get to work on Amazon. They didn't like my card).
Google doesn't have Amazon's connection to booksellers, but they have a good connection to academia, and now Google will start digitising university libraries, starting with Harvard's library and continuing with Oxford, Stanford, the University of Michigan, and the New York Public Library. There will be online access to works in the public domain, but obviously the advantages of that are much, much less than they could be because of insane copyright extension legislation and because Lessig lost Eldred. How sad.
Obviously an initiative like this doesn't work very well for closed-source (technology-dead) material, but on the other hand it presents a major upgrade to all the material that is available, so maybe this is a way to show owners of technology-dead material how wrong they are in keeping it out of the hands of potential users.
There's so much money in cocaine that Colombian drug barons are applying the same sophisticated biotech techniques to their illegal coca crops that regular farmers are applying to corn, grain, and tomatoes. Specifically, Colombian police have identified genetically modified coca plants that produce up to 8 times the cocaine of regular coca plants. In addition the new coca plant is much more robust than the natural original. One wonders what bizarre genes have been introduced into the wild in this way...
No, The Governator will not be emigrating to Iraq, but Wired has a story on the latest in remote-controlled military hardware: Robots with guns. Ground combat without casualties (on the American side). The machines look a lot like the war machines in the futuristic sections of the Terminator film series, although they're a lot smaller.
A shipment of the Talon robots is to be deployed to Iraq in spring 2005 for actual combat use. Is this the T1, predecessor of the Arnold machines?
(Nr 3 in a series of lives imitating art)
[UPDATE: The analogy to the Terminator films wouldn't be complete without this story on self-reconfiguring (i.e. shape morphing) robots]
Weiser's Principles of Inventing Socially Dangerous Technology:
1. Build it as safe as you can, and build into it all the safeguards to personal values that you can imagine.
2. Tell the world at large that you are doing something dangerous.
Jon Udell reviews MSH - the Microsoft Shell, which adapts the old Unix "little tools" philosophy to an object-oriented world. Tools emit objects, and all kinds of adapters are ready to massage those objects in a style similar to the old Unix way. Default adapters turn objects into columned output, apparently...
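The philosophy is easier shown than told. This is not MSH syntax, just a Python analogy of the idea - tools emit objects, filters work on objects, and a default adapter renders columns at the end of the pipeline:

    # Tools emit structured objects rather than text lines...
    def processes():
        yield {"pid": 101, "name": "init", "mem": 1.2}
        yield {"pid": 455, "name": "httpd", "mem": 23.9}
        yield {"pid": 456, "name": "httpd", "mem": 24.1}

    # ...filters operate on the objects, not on string soup...
    def where(objects, predicate):
        return (o for o in objects if predicate(o))

    # ...and a default adapter turns whatever reaches the end of
    # the pipeline into columned output.
    def to_columns(objects):
        rows = list(objects)
        headers = list(rows[0])
        print("  ".join(f"{h:>8}" for h in headers))
        for row in rows:
            print("  ".join(f"{row[h]:>8}" for h in headers))

    to_columns(where(processes(), lambda p: p["name"] == "httpd"))

The win over classic Unix pipes is that no tool downstream ever has to re-parse columns out of text.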
Clueless browser review:
But the bottom line is that at the moment there isn't a browser that is:
1/ Free
2/ Has tabbed Browsing
3/ Offers the extensibility of Firefox, but without the 'clunky' geeky feel and problems.
4/ Is stable, loads pages correctly and isn't a hackers dream.
What problems? Firefox just works. (incidentally, my site stats give a Firefox market share above 20% at the moment - it's 20% cumulated over 2 years and growing fast)
Since the reviewer is looking to Google to rectify that problem, it seems this is a no-cash version of The Money Illusion, which in this case goes like this: "It can only be good if it is made by someone with a lot of money" - the Google brand is seen as inherently good. The free Firefox, since it's just made by people, can't be good.
Also on boingboing, a disturbing image of a voting-machine error screen.
This was reported by concerned geeks who also reported other voting problems. More voting news here. (Best comment: "Long time bitcher, first time voter.")
The brain is a magical apparatus, but easily fooled. Take these low-tech 3D images from Burning Man. By shooting stereo images (i.e. two images taken simultaneously through lenses next to each other at a distance similar to eye distance) and displaying them while shifting rapidly between the left and right image, you get a much better experience of depth than you would expect from such a simple-minded approach. I would love to see this applied to film. You could do that intelligently, using simultaneous panning and left/right shifts to get the 3D effect without quite so much of a wiggling sensation.
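The technique is so simple-minded you can reproduce it in a few lines. A sketch using the Pillow imaging library, assuming a stereo pair in hypothetical files left.jpg and right.jpg of the same size:

    from PIL import Image

    left = Image.open("left.jpg")
    right = Image.open("right.jpg")

    # Flip rapidly between the two viewpoints: ~100 ms per frame is
    # enough to trigger the depth illusion described above.
    left.save("wiggle.gif", save_all=True, append_images=[right],
              duration=100, loop=0)

That's the entire trick - the depth perception happens in your head, not in the file.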
As reported in Wired, the device TV-B-Gone (copy of website here) is like an anti-TV hand grenade. When activated, the device will send a minute's worth of different shutdown signals to televisions in the vicinity. It knows most every major make of television.
The Wired piece recounts the story of a successful field test at a local Best Buy, where all the televisions in the store turned off one after another after one click on the device.
This reminds me of that geek fantasy, the cell-phone jammer, but obviously this device is much less harmful. After all, you can't call the emergency line on your television.
There's a reason us geeks still like PCs over other media devices: Having hackable general-purpose networked devices makes rapid innovation possible. Witness the buzzword right now, podcasting. What it is, is simply RSS applied to MP3s - offlining audio the way text got offlined. RSS 2 is flexible enough that there's an emerging standard for how this is done automatically; an early entry in this new software category is iPodder. So the iPod becomes an audio TiVo on an open network.
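The mechanics are almost embarrassingly simple: RSS 2 items can carry an enclosure element pointing at a media file, and a 'podcatcher' just downloads what it finds. A bare-bones sketch (the feed URL is hypothetical; iPodder obviously adds scheduling, download state and player sync on top):

    import urllib.request
    import xml.etree.ElementTree as ET

    FEED = "http://example.org/audioblog/rss.xml"  # hypothetical feed

    # Fetch the feed, find the <enclosure> elements, offline the audio.
    tree = ET.parse(urllib.request.urlopen(FEED))
    for item in tree.iter("item"):
        enclosure = item.find("enclosure")
        if enclosure is None or enclosure.get("type") != "audio/mpeg":
            continue
        url = enclosure.get("url")
        print("fetching", url)
        urllib.request.urlretrieve(url, url.rsplit("/", 1)[-1])

Run that from cron and sync the files to the player, and you have the whole category in twenty lines.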
I'm not that big on radio in the first place, so I'm not really the target market for this kind of thing, but obviously I'll need to have a look. Please remember, though, the truth of the audio blogging manifesto.
Weblogs, GMail, Amazon web services (I should probably continue by naming e.g. salesforce.com, but I don't know anything about that so I won't) etc. According to Tim O'Reilly these are all examples of a new, maturing conception of the web. The web is growing in perceived capability very rapidly these days, and consequently people are moving more work to the web. It's certainly true that the variety of data consumed over HTTP is growing rapidly. So is loose integration of helper services consuming and processing this data, e.g. feedburner, bloglines, and technorati when it comes to blogging infrastructure.
Meanwhile, from the outside the whole WS-* process still looks like a money hole to me. But I'm strictly on the outside. I think that's also Loosely Coupled's point here. The simple, easy-to-integrate, non-"enterprise computing" services are winning at this point, because they're there and usable, if not optimal. It's de facto REST in most cases and REST by principle elsewhere that's getting used by most clients. The web service revolution isn't in the future. It was yesterday.
This kind of attack from below is pure Innovator's Dilemma. The road ahead sketched by Phil Wainewright is too: The simple, but good enough, services will eventually mature to the point where they will be useful for everybody. If the big players keep insisting on firming up heavy, hard-to-implement, enterprise models of web services, then maybe I was wrong back when I said that the incumbents get it.
The amazing Blue Gene project from IBM has yielded another reward, by reclaiming the supercomputing throne from NEC's Earth Simulator.
The Blue Gene architecture is still developing and is supposed to be 10 times as powerful as today as soon as 2005. The Earth Simulator was able to hold on to the throne for 2 years at a multiple over previous systems of only 3-4, so the final Blue Gene machine could have a very long life as the fastest on earth.
I just searched (without any luck) for the Ctrl button on my TV remote. That's how addictive Mozilla Firefox tabbed browsing is.
I really need tabbed TV surfing too. My TV does not have teletext/videotext memory, so I lose videotext browsing state on each channel change. That's really annoying, since videotext updates are so slow that I always switch to the image while waiting for the weather report or the latest scores. I only have channels with nothing good on, so I always change the channel, having forgotten about the videotext loading, and am left informationless.
Here's an example of the kind of material you can do in a webcast that's hard to do in broadcast television.
In this interview Nassim Taleb talks about risk and about how most investors underestimate the importance of rare events. A point along the same lines as those made by Benoit Mandelbrot in his recent book.
What's interesting about the interview is that it is long and I would say quite technical really, even if Taleb does not actually throw equations around.
Since distribution cost dominates production cost, this kind of limited-audience interview makes little sense for broadcast.
As we immerse ourselves deeper and deeper in a knowledge society, where value comes from specialization, i.e. limited knowledge, we should find most of our interests have this quality: They're much more suited for the world of ends than for anything else. What's disconcerting for broadcasters is that this kind of specialization is even true for entertainment. I read somewhere recently that kids simply aren't watching television anymore, they're playing networked games.
Remember the scene from Minority Report where Tom Cruise is on the run through a shopping mall, and all the signs interact with him because they can biosense his identity? It is a very effective envisioning of an intrusive database society of the future.
While the personal identification is some time off, you can get pretty close even today with Human Locator. A camera/software combination picks out the body shapes and positions of people in front of e.g. a store window and integrates with computer generated window displays so they can react to street movement outside:
The Human Locator analyses a camera feed in real time, sending detailed information about people's location, size, and movements. This data is then used as input to control projections, video, graphic animations, and sound.
Such an obvious concept that I and everyone else who is truly geeky thought of it several years ago.
Pick-and-Drop is an extended concept of the commonly used drag-and-drop. With this technique, a user picks up an object on one computer display with a stylus, then drop it on a (possibly different) computer display. For example, a user can select or create a text on one's own PDA and pick-and-drop it at the desired location on the whiteboard. From the implementation point of view, the data is transferred through the network, but from the user-interface point of view, this technique allows a user to pick up digital data as if it were a physical object.
Some implementation details here.
Fast, cheap, permanent, dense, energy-lean - those are the five requirements we have for the memory devices in our PCs, and there's an upstart promising to do all five with carbon nanotubes. I have absolutely no clue if this story is vapor or in fact solid carbon news, but the dream is a nice one.
There is no triviality test for patents. If it's new, it's patentable - witness the double-click patent.
As some of the comments here point out, there are interesting specifics to the patent: It is specifically about using simple input devices in simple equipment and using input timing patterns as meaningful input. That's an actual interface that you need to carefully develop - but it's still a problem that patents are issued with very broad characterizations of the invention and consequently overly broad applicability.
It's easy to think that the ZDNet story on new terabyte GMail mailboxes is a hoax, but no, it is not, as witnessed by this blog post. Sadly, the terabyte allocation seems to be easily revoked if you try to actually use GMail as a file server.
10 days ago Just struck up a conversation with me over MS Messenger from the pedestrian street in Copenhagen via somebody's open WiFi AP. Today, my good friend Martin got in touch from the bank of the river Thames, this time via a PDA/Bluetooth/cell phone/GPRS combo. Not only does this feed my gadget envy, but it simply must be a sign of sorts. Maybe the fact that all they said to me was "Look, Claus - no wires!!" is another sign of sorts.
I have mixed feelings about Gmail. Technically Gmail does nothing that isn't already widely deployed in handling email - on the other hand Gmail threatens to take ad personalization to the level of cold reading.
Tim O'Reilly sums up some reasons why the Gmail privacy backlash is bogus - and the basic argument is summarized early:
There are already hundreds of millions of users of hosted mail services at AOL, Hotmail, MSN, and Yahoo! These services routinely scan all mail for viruses and spam. Despite the claims of critics, I don't see that the kind of automated text scanning that Google would need to do to insert context-sensitive ads is all that different from the kind of automated text scanning that is used to detect spam. [...] Google doesn't have humans reading this mail; it has programs reading them. Yes, Google could instruct a program to mine the stored email for confidential information. But so could Yahoo! or AOL or MSN today.
Like Doc Searls did in public, we talked at the office about Google's new managed searchable email service GMail as an April Fools' prank - but I did a quick harddisk price computation and figured that the kings of terabyte text handling could probably handle a gig of email per customer at very low cost. But Doc Searls got to be the guy who publicly fell for the real prank: publishing a real but unbelievable story on April 1st. There's no question it was a setup: The press release is informal and tongue-in-cheek (who writes "Heck, Yeah." in press release titles?), so this was clearly an inverted prank and extremely good PR work. Go to Shellen.com for the insider story and screenshots. They seem to include Google Groups style message threading for your GMail.
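For what it's worth, the back-of-envelope computation goes something like this - all the prices are my rough 2004 assumptions, not Google's actual numbers:

    # Can Google afford a gig of mail per user? Rough 2004-ish guesses:
    disk_price_per_gb = 1.0   # USD, commodity ATA storage
    replication = 3           # store everything in triplicate, GFS-style
    fill_rate = 0.3           # most users won't fill their gigabyte

    cost_per_user = disk_price_per_gb * replication * fill_rate
    print(f"raw storage cost per user: ${cost_per_user:.2f}")
    # ~$1 per user - easily recouped by a year of targeted ads.

Under those assumptions the gigabyte is a rounding error, which is why the announcement is unbelievable but true.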
On a more serious note: GMail is interesting as an escalation in the Google/Microsoft battle for the desktop that is about to begin for real. Microsoft has been doing vaporware marketing for a new search engine for a long time now, and GMail looks a lot like Google's counteroffensive. This is the battle of the PC and the 1000-developer rich app against the agile, server-side, modern-language, network-centric app. Or at least I think Tim O'Reilly would agree that it is.
Some technical things I want to know:
Does GMail interact with standard mail protocols so that I can integrate it into my usual mail applications: IMAP, POP, SMTP? (The FAQ currently says no - which is a shame)
Will the Google search and organisation services extending standard email be available via a web service interface for deeper integration with the desktop?
The XCP protocol is hopefully an April Fools' joke.
Making the blog rounds are the plans for the world's first randomly assembled supercomputer. Randomly assembled meaning, in this case, that it will be made up of just the machines of the people who show up.
This presents interesting ideas for a lot of other venues: Why doesn't some airport sponsor a flash mob computing event and provide the airport WiFi link as the networking infrastructure for free? Or how about The Cinnamon Decaf Soymilk Extra Grande Moccachino Computer - made of Starbucks patrons' computers all over the world. Donate the time to SETI and give away free muffins with every detected radio spike.
The 802 series of wireless data protocols is about to get a new member dubbed WiMAX. The key point about the new standard is dramatically improved reach to the point where covering cities with fast wireless becomes very simple.
3G is doomed. Telephone companies based on a "voice" business model are doomed. Telephony that does not just travel on the open worldwide data network is doomed.
Microsoft's iPod killer sounds very much like an N-Gage-style failure:
The Creative player weighs in at 330 grams (11.5 ounces) -- roughly three times as thick as an iPod and roughly twice as long to accommodate its television-quality color screen.
An interesting graphical comparison of the top 100 pages returned for a search on Yahoo and Google. The mixing is brutal: Some top-10 pages on one engine only make it into the top 70 on the other. The first page on Yahoo that is not in Google's top 100 is number 10. The first page in Google's top 100 that is not in Yahoo's is number 9. Do search engine optimizers get paid by the engine or by the traffic improvement, by the way?
(via David Weinberger)
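The measurement itself is trivial to reproduce for any two engines, if you can scrape the result lists. A sketch with hypothetical stand-in URLs:

    # Compare two engines' top-100 lists: overlap, and where the
    # top-ranked pages of one engine land in the other.
    def overlap_report(a, b, cutoff=10):
        shared = set(a) & set(b)
        print(f"shared results: {len(shared)} of {len(a)}")
        for rank, url in enumerate(a[:cutoff], start=1):
            where = f"rank {b.index(url) + 1} in B" if url in b else "not in B's top 100"
            print(f"A rank {rank:3}: {url:12} -> {where}")

    # Hypothetical stand-ins for scraped result lists:
    engine_a = [f"page{i}" for i in range(100)]
    engine_b = [f"page{i}" for i in range(5, 105)]
    overlap_report(engine_a, engine_b)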
We all know about the problems of the Mars Rovers, but it turns out that driving a robot on Earth isn't that simple either. The problem of control is quite simply immense, even for simple, easily goal-defined tasks like going from A to B. As long as unknown subgoals are the main feature of a problem, current robotics has no answer.
The OQO, a super compact full-function portable computer and vaporware perennial, has had a site update. The old "Will be available in 2002" line is now "Will be available fall 2004", and there's a snazzy video that looks like a "Mission Impossible 3" promo, complete with motorcycles and 60's nostalgia - in this case the nostalgia of old bulky computers.
My cell phone died some time ago (please stop calling until I get a new one) and just today I settled on a requirement for its replacement, in addition to PIM synchronization: It must run Perl.
Obviously traffic is less intense than it might be, but the Technorati beta is really fast. Sub-second query times are essential for that Google-like blog surfing fun.
FCC Chairman Michael Powell concedes plain old telephony is over, as reported on isen.blog. The undoer is easy-to-use, PC-based voice IM. It's not quite VoIP, since it does not integrate with POTS, but it sounds good and it's available to a lot of people.
It is worth pointing out that the winners are applications that do seemingly pure point-to-point connections over the general internetwork ether. Due to the failure of ISPs to deliver true end-to-end connectivity to a majority of connected devices, the applications have to fake the pure end-to-end model, but that's hopefully a temporary condition.
Usually Vannevar Bush's idea of the Memex is credited as the earliest concrete envisioning of the Web. But an article on Boxes and Arrows suggests an earlier example, namely that of Belgian man of letters Paul Otlet who envisioned a device remarkably similar to the Memex - but in 1934, years before publication of "As We May Think" in 1945.
One has to be careful with these claims though. First of all, Bush's Memex is more important, since it was a direct influence on later inventions of hypertext and not just a similar idea. Secondly, one shouldn't get too carried away. The notion of 'universal systems' and classification is much older. Immediate forefathers may be found in the works of the Enlightenment. In fact, during the Enlightenment it was the central concept. The Encyclopédie of Diderot is an attempt to construct such a 'universal book', Leibniz dreamed of languages of universals to describe all knowledge and automate the discovery of new ideas, and Carl von Linné founded the study of taxonomy.
You can look further, back to Greek philosophy and imperial inventories of Egypt and China. The urge to classify, collect and compare runs very deep.
Tim Bray mentions a Wired essay by Neal Stephenson on the laying of all the cable that ferries the information we are so dependent on. I didn't remember reading it, and a quick check of my "past issues of Wired" collection reveals that the closest to issue 4.12 I could get was 4.11 (the cover of which asks of failed boom-time contender Marimba "The Next Netscape?" (both companies ultimately failed but still exist in much reduced form, so now we know the answer: Yes! Marimba was the next Netscape, just not in the spirit intended)) - an issue containing the great "Greetings from Burning Man" essay by Bruce Sterling. Wired was having a good year, obviously.
Enough with the past issue dropping. The bottom line is that I am looking forward to this as much as I looked forward to reading the now classic In the Beginning was the Command Line.
The compression and power efficiency previously specified for the IBM Blue Gene project (previous mention here, here, and here) has now been realised, albeit in a small-scale system. Of particular interest is the fact that the new IBM prototype is as fast as the fastest machine in Denmark. The IBM machine fills up half a rack. The Danish Linux cluster consists of 500 PCs. Truly a remarkable difference, and evidence that the age of custom high-performance hardware is not nearly over yet.
Over a year ago I posted a few comments about the pricing and power consumption of supercomputers, and later I compared that to the power and power use of the brain.
Now Wired News reports on a new chip providing 25 GFlops at 3 W power consumption. That translates into an ultralow 0.12 MW consumption per petaflop. That figure is misleading since the power consumption of memory systems etc. is not included, but it is still impressive, since e.g. the IBM Blue Gene machine is expected to consume 2 MW (which is a comparatively low rate of power consumption as supercomputers go). I don't really know how the power consumption of the usual supercomputer is distributed between the computational core and the memory system.
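The conversion behind that 0.12 MW figure is straightforward:

    # 25 GFlops at 3 W, scaled up to a petaflop:
    gflops, watts = 25.0, 3.0
    gflops_per_petaflop = 1e6
    mw_per_petaflop = watts * gflops_per_petaflop / gflops / 1e6
    print(f"{mw_per_petaflop:.2f} MW per petaflop")  # 0.12
    # Blue Gene's expected 2 MW is roughly 17 times that - though again,
    # the chip figure leaves out memory and interconnect.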
In terms of number of processors it also represents about a tenfold increase in power per CPU compared to a regular PC of about a year ago, as that expressed itself in the theoretical computational speed of clusters like The Horseshoe.
We're still a while away from brain level efficiency.
As I write this, this Italian language news story is Google News' top pick in the technology category....
The indefatigable Just has also been reading Don Norman, as is evidenced in his keyboard rant micro-play wherein a variety of input devices get together in the text lounge for a debate. I'm not so sure about the condemnation of Fastap - one needs to feel it with fingers to judge - but that doesn't detract in any way from the keyboard conversation.
Donald Norman's attack on the PC and the technology mindset it propels is interesting reading, even if we early adopters might feel that the PC is a little better at what it does than Norman would have us believe.
One of the annoying things in the book is that Norman does not consider the possibility that maybe GUIs could be more like appliances. Maybe the PC as such is viable as appliance infrastructure; it's just the bulky applications that have matured on the platform that aren't delivering on usability. In fact there is an interesting story (supporting Norman's theories on technology development drivers) to be told in that regard. When I started reading computer magazines, the object-oriented architecture of GUIs was still up for grabs. The battle was between Microsoft's OLE technology and the OpenDoc technology sponsored by IBM and Apple. Both technologies address the composition of complex documents from parts, but the way the story is usually told, the two technologies differed in philosophy exactly by being tool-centric (OLE) and document-centric (OpenDoc) respectively. The reason for Microsoft's tool focus was exactly according to Norman's argument: The monolithic apps provide Microsoft with a lock-in strategy, which further enhances the market dominance of Microsoft.
So the PC and its GUI could have been an appliance assembly platform, but due to the nature of technological competition that just didn't happen.
Another reason why the book is interesting reading is that it is from 1998. As I was reading the book and the many dreams of a better way, I felt a slight disconnect with the book even though I considered it contemporary. But 5 years is a lot of time in technologyland. Many of the developments in low-price displays and better batteries that Norman talks about have happened, and indeed the devices are beginning to appear. So far they have made hardly a dent in the PC infrastructure, but that of course is in the nature of disruptions. Apple is certainly beginning to see a complete change into a content and appliance company via the iPod - a development foreseen with great precision by Norman.
And finally, on reading the book it is crystal clear what the deal is with usability and cognition/behavioural science. For that reason also it is recommended.
Robert Scoble is (of course) on the anti-ECHO team.
Who is gonna be the Dave Winer (er, evangelist) who forces Google to implement Echo the same way that IBM implements it? Or, who is gonna be the Dave Winer (er, evangelist) who forces Microsoft to implement Echo the same way that Sun does it? Or, who is gonna be the Dave Winer(er, evangelist) who forces Moveable Type to implement Echo the same way that Google does it?
I wonder who is the SMTP evangelist at IBM/Sun/Microsoft? Is it one and the same, or is it just that they actually DO care about interop? Who is the TCP evangelist? Who is the HTTP evangelist? Good specs for important software do not necessarily require a sponsor, even if they require an editor.
Radio is about to change for the better as cell-phone 'software radio' chipsets start to ship:
Sandbridge's chips create chameleon-like radios for cell phones capable of changing from one interoperable wireless standard to the next. The radios flip among Code Division Multiple Access (CDMA), Global Systems for Mobile Communications (GSM), and any of a clutch of other wireless standards using either software stored in the phone or downloaded over the air.
Apple just released the baddest Power Mac G5 yet - the first major 64-bit desktop system around. You can buy a truly absurd dual-processor system with up to 8GB of RAM if you have some data-rich number crunching to do or if you just suffer from spec lust.
All that, and it runs GNU tools and a beautiful graphical front end... The day when I buy my first Mac is getting closer.
SCO brings forward - with great energy, admittedly - claims that Linux source is under SCO copyright. The reaction from Gartner Group: "Hold off Linux purchase". Meanwhile all observers agree that SCO will have a hard time proving any wrongdoing in court - if they actually produce evidence of wrongdoing, which they haven't yet.
Oracle makes a hostile bid for PeopleSoft. Meanwhile everybody agrees the offer is too low and that it was mainly put forward to disturb PeopleSoft's acquisition of JD Edwards. The media reaction: "Postpone your ERP purchase".
Is it really that easy to destroy the profits of a company? Are the media companies and analysts prudently worried or gutlessly agitated?
Dan Gillmor informs us that IP telephony is big in Japan - the Early Adopter Country:
Soundbite: "...she uses an Internet telephone, as do her parents. And so do more than 2 million other people in Japan."
So VoIP is finally eliminating old phone service (which will hopefully end up producing a better dialing interface than phone numbers), and WiFi is set to stun everyone between right now and the next year or so. If Nikolaj has anything to say on the matter, WiFi will be a transparently available commodity in urban centers Real Soon Now. Then we just need a WiFone, and mobile phone companies will start to look like a bad idea. In particular, the outrageous pricing schedule for data traffic on GPRS networks will look like a very bad idea. Talk about blowing momentum.
And of course the really scary thing is that I am sooo late in reporting this story. In places that get it (US metropolitan areas) it is already there.
As mentioned, standards come about because somebody decides to adopt a practice found elsewhere to make their own information 'tool compatible'. Case in point: Jakob's Law of the Internet User Experience
Users spend most of their time on other sites. This means that users prefer your site to work the same way as all the other sites they already know.
I found the quote as part of a nice collection of theories of the week. Good idea, wide range of knowledge presented, well executed. Nice clean website.
Like the logo.
Previous reports on the Google cluster were pretty glossy - but now there's a detailed look at the cluster in IEEE Micro. The most impressive thing about the article is the level of detail of the optimization work done on the Google cluster. This is true engineering. Everything is measured and accounted for. No feature of the system is unanalyzed - from overall architecture and systems management automation down to consideration of what type of microprocessor kernel offers the best price/performance for search.
And the impressive thing is of course what is in the headline: All of this work goes into making the service as simple and fast as possible. In a truly golden performance, effort shouldn't show at all.
The commercial pressure on the 'free' internet is mounting. Lately Neustar/NeuLevel has been trying out some DNS changes so that available names lead you to a search page. That's right - the hideous 'search MSN' feature of IE built right into the DNS. This is a hideous and completely unnecessary hack. At least with the IE MSN search I have the opportunity of turning it off. Turning off default DNS will be considerably more challenging.
The commercial reasoning is obvious, but if the notion of public space is ever to mean anything on the Internet, this kind of wholesale commercialization will have to be stopped. One would hope ICANN would step in in the case of .biz.
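Part of what makes the hack so corrosive is that clients can no longer trust a name lookup failure. The standard detection trick is to resolve a name that cannot plausibly be registered and see if you get an answer anyway. A sketch using nothing but the stock resolver:

    import socket
    import uuid

    # A gibberish label should not resolve. If it does, the registry
    # is hijacking NXDOMAIN responses with a wildcard record.
    def tld_has_wildcard(tld):
        probe = f"{uuid.uuid4().hex}.{tld}"
        try:
            addr = socket.gethostbyname(probe)
        except socket.gaierror:
            return False   # honest "no such domain"
        print(f"{probe} 'resolves' to {addr}")
        return True

    print(tld_has_wildcard("biz"))

Every mail server, spam filter and crawler that relies on "domain does not exist" now needs a workaround like this.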
Wired News is asking the same question I did in a previous post: Why Did Google Want Blogger?
The focus here is on PageRank for RSS. One could even imagine PageRank extensions for RSS, i.e. letting the XML feed itself carry the kind of data that e.g. Movable Type's TrackBack enables.
I just remembered what Yahoo's recent acquisition of Inktomi reminds me of: It's like buying Western Union to compete with the recently founded Bell Company. No amount of marketing muscle or corporate power dealing can undo the superiority of Google's search.
The new keyboard browsing features of the Mozilla browser are great. Using the keyboard you can key through anchor text to any link, indeed to any text rendered on a page. This is the closest I have seen to an anti-mouse, and pretty groundbreaking. Code editors should be that good.
All I need now is eye-focusing to do the kind of rudimentary focal shifts that are useful and the mouse could be a thing of the past.
Update: I just need an Eye Mouse to go with the anti-mouse to be rocking.
It seems that during the current depression the 'natural laws' of magical price/performance gains for technology are still holding true.
News about IBM's supercomputers quotes a price of 290 M$ for two machines - one 100-teraflop machine and one 360-teraflop machine. The previous mention of BlueGene/L had it listed as a 200-teraflop machine, not 360 teraflops, but the price was listed then as 100 M$. Assuming that the relative timing of the two machines means that the cost for BlueGene is only two thirds of the full price, that's 200 M$ for 360 teraflops, or 0.55 M$ per teraflop when delivered. This isn't too far from the estimate of a year ago, although that estimate had an earlier delivery date.
At Loosely Coupled a story is made of some naive quotes on emergent intelligence in the distributed information infrastructure of the web. It is well known that all claims up until today about AI have been wrong, certainly when it comes to establishing a timeline, but I think it is important to point out that the analogy with Pasteur's nihil ex nihilo experiment - which established the importance of the cell - is flawed. Pasteur's findings are well established; it's just the analogy that is wrong. The author, Phil Wainewright, is forgetting that intelligence actually did spring from nothing - it is just not an everyday occurrence, but took all of our evolutionary history to happen. The real question therefore becomes: is Phil Wainewright actually a creationist?
So: No, wiring together all of today's computers will not - not even by duplicating them a million times - produce intelligence. But seeing as 1) we're actually capable of injecting design into the process of building the future network, and 2) intelligence and other very advanced structural organization of information (like e.g. the cell) can in fact arise without a conscious designer, I think the score comes out two points in favour of realising actual intelligence in a distributed computer environment in the future.
Japanese telecommunications companies have developed what can best be described as a Computer-Human-Human-Computer interface, according to New Scientist.
Using the physical contact of a good old-fashioned handshake, the report states that data transfer rates of 10 Mbit/s are possible - without discomfort to the transmitter, one would hope.
This is one of those really old-school science fiction concepts that have been around for so long that all reports of it finally being possible are immediately considered unbelievable, because the story is just too good. So one would hope that New Scientist has done some solid fact-checking.
A reader of classy.dk proposes a risk index for business proposals as Investment/(Experience * PitchLength). The Delmore Effect suggests that it could be that a lengthier proposal is in fact an indicator for less desire to win in the proposer. So risk may in fact be greater if the pitch is longer.
But of course length of pitch could be an indicator of a poorly articulated strong desire (i.e. anxiety preventing the proposer from getting to the point) as well as a well-articulated weak desire (jargon-heavy detailed analysis of what-ifs that would make sense one year into the business plan, but not in getting the ball rolling).
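As toy code, the two readings of the index look like this (units deliberately unspecified; this is the reader's formula plus my Delmore-flipped variant, nothing more):

    def risk_index(investment, experience, pitch_length):
        """The reader's proposal: a long pitch counts as reassuring detail."""
        return investment / (experience * pitch_length)

    def risk_index_delmore(investment, experience, pitch_length):
        """Flipped per the Delmore Effect: a long pitch signals weak desire."""
        return investment * pitch_length / experience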
Found some notes on the E language (I don't know if the Ecstasy reference is deliberate).
It seems to be a good attempt at distributed personal computing - in fact what digital identities should enable. I like in particular the safe, private methods of dispatch. It remains to be seen, IMHO, if distributed code is really the way to play. Personally I think located code and distributable data is the way to play, and I think all successful internet protocols are evidence in favor of my position. They almost never specify APIs for distribution, but only data formats.
My ideal notion of distributed information would not be distributed algorithms, but distributed data. The data would however be live, in that it would have active and selective security policies applied, so it would be implemented as objects - but objects with only accessors/properties, not functions, defined.
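A toy sketch of what I mean by live data - an object that carries no behaviour, only fields behind a selective policy. All names are hypothetical, and a real version would need signing, transport, and revocation on top:

    class LiveRecord:
        """Distributable data: no methods that do anything, only accessors
        guarded by a per-caller security policy."""
        def __init__(self, fields, policy):
            self._fields = dict(fields)
            self._policy = policy  # caller -> set of readable field names

        def get(self, field, caller):
            if field not in self._policy.get(caller, set()):
                raise PermissionError(f"{caller} may not read {field}")
            return self._fields[field]

    record = LiveRecord({"email": "claus@example.org", "ssn": "XXXXXX"},
                        policy={"addressbook": {"email"}})
    print(record.get("email", caller="addressbook"))  # allowed
    record.get("ssn", caller="addressbook")           # raises PermissionError

The point is that the policy travels with the data, so the data can move while the code stays located.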
There used to be a time when a group of people organized around a good idea immediately formed a society or foundation to propagate the good cause. These days they form companies.
A new Danish networking upstart, Organic, will build a wireless broadband network, street by street, café by café, throughout (one would guess) Copenhagen. The business plan sounds like it was stolen from Sky Dayton's Boingo, but that of course remains to be seen when actual products arrive from the company.
Just when you thought Linux-powered palmtops were the coolest, most geeky portable computing devices available (Java phones appear not to be happening just yet in reasonable quality), you learn about the Sony PCG-GT3/K. A full-featured portable with a built-in camera - a real one, not a webcam - with 30 gigs of hard disk space, but unfortunately powered by the evil OS from Redmond. 17 hrs of battery time also! This MUST be the baddest portable computing device on the planet right now.
Judging from the choice of OS (2000 or ME) this is really old news btw.
Yes, there are many speed posts, but IBM now plans a teraflop on a chip for multimedia processing. That will be something to see. Presumably a chip like that would operate almost exclusively on cache memory to accomplish these speeds.
My favorite supercomputer is of course the Google cluster...
The Google cluster could very well be the largest commodity cluster, and the story above (as well as this one) has some interesting data on the kind of things you need to do to manage thousands of commodity servers in a clustered environment (as discussed previously).
Hmmm, the Danish fuel-cell advocacy website brintbiler.dk thinks that the keyword 'management' was a good one to purchase on Google AdWords. I found out trying to find a link for
Either they weren't thinking when they purchased, or Google is not in proper working order, or there are hitherto unknown connections between component programming and green high-tech energy.
The University at Odense (in Denmark) just set up a new supercomputing cluster built entirely on commodity hardware.
It is worthwhile to compare this to the supercomputers built with 'rocket-grade' technology, i.e. the fast Crays and the IBM petaflop machine.
The price/performance ratio of the commodity system cannot be beat, of course. But The Horseshoe, as the commodity system is dubbed, is close to the maximum conceivable on commodity hardware, with 512 PCs in parallel.
By comparison the top Crays will be 20 times as fast (servicing 10K PCs would not be fun) and the IBM machine 50 times faster again (servicing 500K PCs is decidedly non-funny). As to power consumption, the PC solution is on a level with the Crays: based on the reported price of power, the consumption for the full cluster is approximately 54 kW (1.7 MDKK for three years of continued use at 1.2 DKK per kWh comes out at about 54 kW), with the IBM solution coming in at much lower consumption rates (2 MW per petaflop, i.e. 4 kW for 2 teraflops). The heat from the PCs will take quite a cooling system, and scaling that cooling system is hard without doing something like the Cray or IBM solution.
So the IBM technology would save 0.5 MDKK per year on power alone.
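Spelled out, the power arithmetic goes like this (using the reported figures: 1.7 MDKK of electricity over three years at 1.2 DKK per kWh, and IBM's 2 MW per petaflop):

    # Cluster draw from the electricity budget:
    kwh_bought = 1.7e6 / 1.2          # total energy, in kWh
    hours = 3 * 365 * 24              # three years of continued use
    cluster_kw = kwh_bought / hours   # ~54 kW average draw

    # Blue Gene's rate scaled down to the cluster's 2 teraflops:
    ibm_kw = 2000 * (2 / 1000)        # 2 MW/petaflop -> 4 kW

    saving_per_year = (cluster_kw - ibm_kw) * 8760 * 1.2
    print(f"cluster ~{cluster_kw:.0f} kW, saving ~{saving_per_year/1e6:.1f} MDKK/year")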
In this article, a member of the Danish system team declares the supercomputer "dead" - but given the industrial-scale environment you would need to invest in to scale the commodity solution, I'm guessing that if you really need top speed, they are far from dead.
Supercomputers are also clustered systems, but the power use and sheer physical size of a large-scale clustered system should ensure a place for custom hardware, one would guess.
UPDATE:
A recent visit to top500.org and links found there gives some nice price and performance quotes for fast systems.
First off, the possibility of the HorseShoe actually reaching 2 teraflops on real problems is hypothetical. This performance is the pure processor speed, where memory bandwidth and the price of parallelization are not considered. From the design of the HorseShoe (commodity networking - Fast Ethernet only), one would guess that you would need an algorithm that works well with very coarse-grained parallelism to achieve any performance near the theoretical top speed. Indeed only the top 23 systems on the worldwide list are rated > 1 TFlops at top500. But that rating is only a test of LINPACK, so for search problems or simulations it may not be very significant.
Secondly, pricing: ASCI White reportedly had a cost of 110 M$ two years ago. Assuming it could be done at half that price today, we're still way above the price of a commodity system. On the other hand, IBM plans to build Blue Gene (or at least Blue Gene/L) for 100 M$ by 2004. This machine will be rated at 200 TFlops, so the price per TFlop will be 0.5 M$. Assuming a further factor-of-2 drop in price/performance over the next couple of years, this is still comparable to the price of a commodity system today - and the x100 scale factor will be hard to do for the commodity cluster.
So commodity systems can't beat custom systems, even if the processors of almost all the supercomputers are commodity processors.
Cray is developing a 50 GFlop computer (the SV2 mentioned earlier) using a more traditional supercomputing approach. There are some price announcements for government orders for two of these systems, but it is unclear whether the reported 19 M$ includes a full-sized system.
Reading about GM's Billion-Dollar Bet there are several points of good news about fuel cells:
First off - fuel cells are no longer found only in German research labs (and of course, as we're proud to mention, at Risø, the Danish research institution founded by Niels Bohr and at the forefront of modern energy research (wind power and fuel cells)).
Secondly, products ARE beginning to appear. As per the usual technology adoption path, fuel cells are being tested for real applications first where price does not matter as much as the inherent benefit of fuel cells: The highest energy density available.
Elsewhere (link TBA) it has been reported that battery-sized fuel cells are reaching the market right about now, for remote, hard-to-service devices where chemical batteries or other power sources are less than practical.
Now that some of the more radical computer-controlled hybrid approaches to power are beginning to appear on non-weird vehicles, maybe the fuel could actually be next.