Article in today's Weekendavisen about the crisis of science. I think the title is wrong, because science is not in a crisis. The problem with science is, in my view, regional and points the other way. Science is doing fine - it is the world that has a problem with science.
What is that supposed to mean? Well, science is just as important now as it has ever been - and is only becoming more important, and fast. With the progress and growing sophistication of information technology, more and more aspects of everyday life are being engineered - that is, rationalized, measured and optimized in some sense or other.
The share of the adult population in direct daily contact with systems and devices that demand an analytical, engineer-like understanding from their users has never been larger. So what is the crisis? It is the growing denial that this is so - splendidly put on display by the new government with its ill-considered and idiotic attack on all expertise, under the banner 'Smagsdommere!' ('Taste judges!'). So there is a crisis in what you might call the scientific public sphere. And that crisis is bad enough, and constitutes a serious democratic problem.
But is that crisis not a purely European affair, perhaps even primarily a Scandinavian one, perhaps even primarily a Danish one? If I had to guess, I would claim that we are really standing right at the edge of (or perhaps even in the middle of) a humanistic crisis instead. The humanistic crisis consists in the total collapse of any coherent, idealistic proposal from the humanities about anything at all - be it a view of culture, a political foundation of ideas, a basic condition for research and knowledge, or whatever else it takes to create something resembling direction or development. In the middle of such a crisis only one story can be told: the story of what one is not - that is, the story of rationality and the evil scientist.
I am of course biased by spending more and more of my own time working with IT and thereby the (hopefully) exact disciplines. But my immediate assessment of the international cultural scene is that it barely exists, beyond the entertainment industry. It often seems as if American society, at least - violence-struck WTO summits notwithstanding - no longer has a cultural intelligentsia that plays any role whatsoever, the way one had the impression it did in the 70s. And one really does get the sense that we are following that development here at home, even after the recent fall of the great optimism, with the economic downturn and 9/11.
Is the new government's misplaced, wholesale shooting-down of all knowledge as 'taste judging' just a political manoeuvre to undo the previous government's many attempts to build culture and a particular view of humanity directly into the state administration - and did it merely, by accident, take the analytical disciplines down with it?
Of course a relativization is going on, and various political odds are stacked against common sense - in the form of idiots like Svend Heiselberg, who thinks you judge the likelihood of more traffic deaths better from behind the wheel than with the statistics in hand.
But the main point is that, at least in the hard sciences, you cannot cheat the scales. You cannot switch off the analysis because it feels better, because then things stop working.
If you want to see what there is to fear from humanistic interfaces, take a look at The Dasher Project. The quiet, contemplative act of writing on paper with a good pen in a silent room, turned into an electronic, psychedelic roller-coaster ride. It is cool though.
This whole humanistic intelligence thing is all fine and dandy - provided the new sensory experience of ever-present communication impulses does not mean that we end up in an age of continuous partial attention. Neal Stephenson's homepage (the link above) really does not want to be disturbed. It is the longest single statement to the effect of "Don't call me, I'll call you" I have ever seen.
This entire thing about symbols/ideas/imagery reminds me of a talk I once heard the Danish sculptor Hein Heinsen give. To put it briefly, Heinsen's approach to his work means there is a fundamental difference for him between sculpture and painting, in that the sculpture is a question of presence and being, whereas the painting is imagery and idea. Like most everybody, Heinsen believes there are just too many ideas going around, which of course becomes the grounds for working with sculpture. Of course the reality of it is that the sculpture as being is often a stand-in for some other 'real' being, and so in fact merely the idea of being - whereas the painting is often reduced from being an image to just being the traces of the imagining, and so in fact more being. Heinsen claimed in all honesty that he was well aware of this flaw in his logic, and that his answer to the whole thing was to make very few sculptures! We should all have that luxury.
In simpler terms we can follow Stephenson and paraphrase
Donald Knuth: Email is a wonderful invention for people who want to be on top of things. I don't want to be on top of things. I want to be on the bottom of things.
The best case yet for web services. I had submitted a HOWTO on configuring a library called epp-rtk for SSL communication. Needless to say, I had long since forgotten what I had suggested others should do with openssl, but a quick Google scan of the words 'claus', 'dahl' and 'openssl' revealed exactly my input as archived by Geocrawler (so thank you too, Geocrawler!): Geocrawler.com - epp-rtk-devel - [Epp-rtk-devel] SSL configuration
Some more indications of the importance of memory bandwidth for the effective power of a computer. In this paper there are some statistics on neuron update speed, transport speed of neuronal information and brain size. It appears that the entire memory of the brain could theoretically be available to every single computation performed - at least as far as raw speed is concerned.
Also the latest Crays mentioned below localize the memory subsystems close to the processor. In the design for IBM's Blue Gene machine the entire architecture uses embedded DRAM, merging the DRAM completely into the processor core (about 8 MB per processor). This may not sound like a lot, but according to the IBM scientists it suffices for the code they need to run.
On another note - as an aside to the mention of the latest Crays: this machine will pack a whopping 2^15 processor nodes - 31 times the Cray. Each node will yield 32 GFlops (3 times the Cray nodes, I believe) for a total of one petaflop, i.e. 10^15 Flops. The power efficiency of the machine will be 2 megawatts per petaflop - orders of magnitude more efficient than the Crays mentioned below.
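A quick back-of-the-envelope check of those figures (a sketch, using the numbers quoted above rather than IBM's official specifications):

    // Blue Gene capability as quoted above: 2^15 nodes at 32 GFlops each
    var nodes = Math.pow(2, 15);                 // 32768 processor nodes
    var gflopsPerNode = 32;                      // 32 GFlops per node
    var totalGflops = nodes * gflopsPerNode;     // 1,048,576 GFlops
    console.log(totalGflops / 1e6);              // ~1.05 petaflops, i.e. roughly 10^15 Flops
    // At 2 megawatts per petaflop, the whole machine would draw about 2 MW.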
A radical approach to Humanistic Intelligence for software development is Charles Simonyi's Intentional Programming, a Microsoft technology which seems to languish in the research labs, if it has not been abandoned. The references to Simonyi on the website of Microsoft Research seem to have vanished. If you have any information on this technology more recent than the quoted article or what appears in the book Generative Programming, then please tell me about it.
It's an amazing book. Reading it makes you feel that
The first experience is of course unfair to the later literature. But still, the second experience does indicate that researchers in the field of software project management have failed to make any significant progress.
My reading of the book coincided nicely with the cover story of the latest issue of Technology Review, about the poor quality of software today - and how it appears only to have gotten worse lately. There are two things in particular that interest me in this connection:
The second point is the easier to address, so let's do that first. There is a standard figure about the expansion in importance of science which says that of all the physicists ever working, 90% are active today. I imagine that number is an understatement in the case of software developers. In a trade that is expanding as heavily in importance and sheer volume as software design, maintaining a standard skill set and a standard technical literacy is an almost, if not fully, impossible task. This is both an issue of the throughput of the education system and an issue of the mental ability of the majority of software developers. The demand for developers therefore implies that the majority of the "technical community" that is software developers will be at the beginning or intermediate level of professional ability, and moreover I suspect that they have grown up using mainly tools and environments designed specifically to be easy and approachable, not necessarily expressive and exact. Contrary to popular opinion I don't believe that ease of use of a tool necessarily makes the work product produced with the tool better (although I will have good things to say about good tools later). So most of the developers in the world are conversant in JavaScript, VBScript and Visual Basic, and few of them are good LISP programmers. As mentioned earlier, some of the most expressive and efficient development environments - for the trained user - are also some of the less accessible environments.
Back to the first point: Why did Object Orientation (OO) fail as a silver bullet for software design? Answer: It did not fail. It just succeeded in disguise as component based development.
In OO design books the pervasive practice of component based development is often ridiculed as being only object based instead of fully object oriented. What we should learn from this is that the key component of object orientation (pun intended) is encapsulation, and the localization and naming of complicated things implied by encapsulation, not the entire bestiary of what you might call ontological linguistic devices available in full OO development environments. By referring to OO as ontological I mean the focus on things and their properties and making statements about them.
As it turns out, the usefulness of active philosophizing (i.e. working explicitly with the ontological aspects of things, by doing OO modeling) is thoroughly dominated by the usefulness of simply having a name and a language for the concrete objects themselves, i.e. components. I'm not sure it is surprising that there is more expansive demand for object users than for object makers. The object maker ends up having a role similar to the language designer and the tool maker, and the trade of object making is therefore a much more select profession than that of the object user.
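A toy illustration of the distinction (hypothetical names, just to make the point concrete): the object maker invents the abstraction, the object user merely wires up the named, encapsulated thing.

    // Object maker: invent the abstraction in the first place -
    // a rarer craft, closer to language and tool design
    function DatePicker(options) {
      this.locale = options.locale;
      this.handlers = [];
      // ... calendar state, rendering and event wiring would go here ...
    }
    DatePicker.prototype.onSelect = function (handler) {
      this.handlers.push(handler);
    };

    // Object user: consume the ready-made, named component
    // without philosophizing about its ontology
    var picker = new DatePicker({ locale: "da-DK" });
    picker.onSelect(function (date) { console.log("deliver on", date); });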
This development is related to another interesting development in software design, namely the birth of Software Pragmatics. This is important enough to merit capitalization.
To explain the grand statement: in linguistics, pragmatics refers to the study of the relationship between language and context-of-use (this definition is quoted from Speech and Language Processing). Among the topics of linguistic pragmatics are discourse analysis, and how language is used to model the world (semiotics). Software Pragmatics does the same thing for software development. The key discipline in Software Pragmatics is the pattern movement inspired by the well-known Design Patterns book. And of course there is the very influential book called The Pragmatic Programmer, which is almost anti-theoretical, simply emphasizing everyday pragmatic thinking. I still think its authors manage to make a great many very valid and general points, even though the scope of their book is not as grand as the scope of the patterns movement. Thirdly, there are the new situation- and conversation-oriented project methodologies such as Extreme Programming, which also fit the mold: orienting the development of the craft towards elevating the quality of the work process and communication, as opposed to a more theoretically rigorous invention of new technique, which is the classical model for developing a trade.
The collection of books just mentioned has had a remarkable influence. I believe that the familiarization of huge numbers of programmers with component based development on the Windows platform is sort of the illegitimate father of this success. It certainly wasn't the heavy Smalltalk bias of the original Design Patterns and Extreme Programming inventors. The component based development approach is focused exactly on the reapplication of effective work processes, as opposed to a more invention-like approach.
This brings us back to the original question of why Brooks' points on software development still apply. As pointed out by Brooks, there have been numerous grand schemes to change the face of software development: automatic programming, generative programming, expert systems, etc. Why the success of component based development? And why does it work where the grand schemes fail? When thinking about this I am reminded of Steve Mann's concept of Humanistic Intelligence. Humanistic Intelligence is a concept for intelligent signal processing that emphasizes the human brain as the intelligent processor of signals, as opposed to other concepts of machine intelligence, where the effort goes into adding intelligence to the machine. The idea behind HI is that it is orders of magnitude simpler to enhance the sensory experience of the human brain to include signals normally processed by machines than to add intelligent processing to the machines receiving the signals:
Rather than trying to emulate human intelligence, HI recognizes that the human brain is perhaps the best neural network of its kind, and that there are many new signal processing applications, within the domain of personal technologies, that can make use of this excellent but often overlooked processor
Software Pragmatics could be seen as an example of Humanistic Intelligence. The developer stays at the center of the development process; only his sensory experience is enhanced, through better conceptualization via components and patterns and through better tooling. This also explains some of the more significant demographic shifts in the world of software developers. The rise of modern reflective languages like Java and Microsoft's .NET platform is an indication that the most effective machine assistance in development is conversational/discursive in nature. Standard modern tools that programmers live by, like symbol completion and symbol browsing, are examples of this. The developer is left in control, and automated programming is deemphasized (I consider the curse of the one-way wizard to be a step down in productivity).
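The reflective part can be made concrete with a small sketch (in JavaScript here, though the point above is about Java and .NET): at runtime the environment can answer "what can this object do?", which is exactly the question symbol completion and symbol browsing answer for you inside the editor.

    // Ask an object which operations it supports - the raw material
    // that completion and browsing tools are built on
    var order = {
      id: 42,
      total: 159.15,
      addItem: function (item) { /* ... */ },
      ship: function (address) { /* ... */ }
    };
    var symbols = [];
    for (var name in order) {
      if (typeof order[name] === "function") { symbols.push(name); }
    }
    console.log(symbols);   // [ "addItem", "ship" ]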
Speaking of supercomputing, the connectivity of the brain compared to modern computer architectures is an interesting fact. Needless to say, this tells us that we have an insufficient understanding of the algorithms required for brain-like processing - but a more interesting question is whether the connectivity in itself adds a new complexity to the system that postpones the arrival of brain-comparable hardware. In other words, is the superscalar vector-processing nature of modern supercomputers so decisive a simplification of the processing model that the hardware capacity of such a system simply is not comparable to brain capacity? Among the questions implicitly asked: What is the memory/processing trade-off of the brain? What is the memory bandwidth?
There's a great promotional video on Cray's new line of supercomputers here. The video is very interesting - among the facts given is the average heat produced by the system: 45 watts per square centimeter! That is the processor heat - so for a kilo-processor system (i.e. one with 1000+ processors) we're talking tremendous power consumption. The computational power of the machine is enormous. They mention a figure of 10 GFlops per processor (or was it per board of 4?) and a total system capacity of 1024 of these boards, meaning a total machine capability of 10 (or 40) TeraFlops. But of course THAT machine will consume insane amounts of power (0.5 MW? - sounds absurd, but 1000 times 0.5 kW and you're there). Interestingly, they're able to fit 16 four-processor boards in a single cabinet, so the largest machines will be 'only' 64 cabinets, which, apart from the need for heat dissipation, shouldn't consume too much space.
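Since the figures from the video are fuzzy, a little arithmetic helps; a sketch using both readings of the number quoted above:

    // Reading A: 10 GFlops per processor, 4 processors per board, 1024 boards
    var gflopsA = 10 * 4 * 1024;        // 40960 GFlops, i.e. about 40 TeraFlops
    // Reading B: 10 GFlops per 4-processor board, 1024 boards
    var gflopsB = 10 * 1024;            // 10240 GFlops, i.e. about 10 TeraFlops
    // Power: roughly 0.5 kW per board times 1024 boards is about half a megawatt
    var megawatts = 0.5 * 1024 / 1000;
    console.log(gflopsA, gflopsB, megawatts);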
Need we mention that the estimated computational power of the brain is somewhere between 10^13 and 10^16 operations per second - so possibly 1000 times that of this new machine. And that the brain consumes about 25 watts of power, so the energy efficiency of the brain compared to our present technology is better by a factor of 10^6 or so. Room for improvement.
But - if we can keep inventing at Moore's Law pace, the brain-quality machine is only 10 years away. Add another 10 years to get decent power characteristics for a single-brain machine (and a high-powered 'brainpower of a university' machine in research centers) and the world could be a very changed place, as previously discussed by Bill Joy and Ray Kurzweil among others.
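The ten-year figure is easy to sanity-check; a sketch assuming capability keeps doubling every 12 to 18 months and that the gap is roughly the factor of 1000 mentioned above:

    // How many doublings to close a factor-1000 gap, and how long does that take?
    var gap = 1000;
    var doublings = Math.log(gap) / Math.LN2;   // just under 10 doublings
    console.log(doublings * 1.0);               // ~10 years at one doubling per year
    console.log(doublings * 1.5);               // ~15 years at one doubling per 18 months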
Apparently you can beat even the distributed guerilla force of P2P file sharing. The finest current service for audio dies as announced here.
UPDATE: February 2005: Updated the bookmark, so it works with current amazon link format. At least for books. At least in Firefox.
UPDATE: March 2008: Fixed again
The javascriptlet I talked about below, which lets you shop at amazon.co.uk for items found on amazon.com, is in this next link: Find on amazon.co.uk. Drag it to the toolbar, click it when on a single-product page on amazon.com. (A sketch of the idea follows after the caveats below.)
Yes, JavaScript links are dangerous.
No, you shouldn't even trust a nice guy like me.
Yes, an examination of the source will reveal that the link does no harm to you or your browser.
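For the curious, the idea is roughly this (a hypothetical sketch, not necessarily the exact link above, and Amazon's URL formats have changed over the years): pull the product id out of the amazon.com address and jump to the same product on amazon.co.uk.

    javascript:(function () {
      // Look for an ASIN in the current amazon.com product URL
      var m = location.href.match(/\/(?:dp|ASIN|product)\/([A-Z0-9]{10})/i);
      if (m) {
        // Same product id, different store
        location.href = "http://www.amazon.co.uk/dp/" + m[1];
      } else {
        alert("No product id found on this page");
      }
    })();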
An interesting study of the effect of social factors (i.e. who does the coding) as opposed to technical factors (i.e. implementation language) in software implementation is rounded off with a final LISP example in this article.
The conclusion: social factors dominate when it comes to efficiency. Unfortunately developers are ineffective at evaluating their own quality of work, so metrics must be applied instead of subjective judgment (or rather, metrics and subjective judgment are uncorrelated - which of them is right is of course another issue).
This may not be surprising, but it only strengthens the case for scripting languages and newer, more reflective environments. Microsoft's .NET strategy is of course a development in favor of this way of thinking.
Interesting, of course, is the fact that LISP is fast to write as well as to execute, and has been around for ages. The only problem for LISP is the lack of integrability with many 'real world' environments. It's funny how old-school ideas like the UNIX shell and LISP still cannot be beaten on fundamentals.
Now that we're on the topic: there are still a few things they should do - from a European perspective.
The best Amazon site with respect to Encyclopediability and book information is the American site, but of course shipping charges and delivery times are less than optimal. What would be nice, then, is easier access to browse the American site but shop from the European sites. Obviously, one could make one's own fancy links - with embedded JavaScript - to accomplish this, but still....
The good people at Amazon have the very best online shop (IMHO). Good service, good selection, easy discovery and very nice Encyclopediability (i.e. the property that you always learn something new when using the site, by scanning the entry next to the one you were looking for - as when reading a paper encyclopedia). But even the best make mistakes. My latest receipt contains the following staggering numbers:
-------------------------------------------------------------------------
Ordered Title Price Dispatched Subtotal
-------------------------------------------------------------------------
1 Speech and Language Processing 43.74 GBP 1 43.74 GBP
1 The Mythical Man-month 28.74 GBP 1 28.74 GBP
1 Applied Cryptography 47.29 GBP 1 47.29 GBP
1 How to Solve It 39.38 GBP 1 39.38 GBP
-------------------------------------------------------------------
Subtotal Including VAT: 159.15 GBP
Delivery Charges Including VAT: 8.64 GBP
Total Including VAT: 3,286,790.31 GBP
VAT 25.00 GBP%: 33.57 GBP
The VAT comes in at only 0.001% (not the 25% mentioned), so the taxes are nothing to speak of - but the handling fee of 3,286,622.52 GBP seems excessive.
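Checking the receipt (a sketch, just redoing the arithmetic from the numbers above):

    var subtotal = 159.15, delivery = 8.64, total = 3286790.31, vat = 33.57;
    // Whatever is left after the books and the delivery charge is the implied "handling fee"
    console.log((total - subtotal - delivery).toFixed(2));   // 3286622.52
    // Effective VAT rate relative to the grand total
    console.log((vat / total * 100).toFixed(3) + "%");       // 0.001%, not 25%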
Of late, Microsoft and others have taken to using September 11 as a marketing device. The tactic is invariably the same as in a recent report of dubious independence, namely the claim that open source software is less secure simply because the source is available. This claim is of course blatantly wrong. The notion that security by obscurity - i.e. the lack of publicly available information about a security flaw - protects against exploitation has been discredited so many times that it is hard to find room to mention them all: the innumerable flaws in IIS and Internet Explorer, the DeCSS story, the PDF/Dmitry Sklyarov story. The Enigma machine is an early example.
So we can only repeat once again that open discussion about security is the best means of security there is.
The nice and adventurous people at Google have done it again with the Google Catalog Search beta. You enter search phrases that appeared in mail order catalogs (that's printed, paper mail catalogs) and they search for them, returning a page with the words highlighted in yellow on the original catalog page!
That this is technically possible is not surprising, especially not to a happy Acrobat Capture user, but the logistics of this particular search engine must be hell. From the looks of the pages on the site, it does not appear that Google received electronic versions of the catalogs; they just hired someone to scan them all, page by page.
I found a quote somewhere from Kevin Werbach of Release 1.0 to the effect that weblogs, web services and wireless internet are the next World Wide Web. The excitement is supposedly back. Then he loses it by stating that 'You heard it here first'. Is he kidding? It's been going on for a while now.
It is interesting though. To me, something like Radio Userland is mainly interesting because it challenges some of the design paradigms we've gotten used to, by efficiently moving control to the edges, using the shared space of the internet itself mainly for storage and to enable discovery.
This new edge controlled network is in the very early stages of formation, and resembles the WWW of 1992-1994: All content is essentially static, since the edge network - the control - is not generally available 24/7 or visible at all.
What kinds of dynamic content are possible on the edge-controlled, sometimes-on network? Is this finally the emergence of a real, live architecture of software agents? Why should it be? Well, the edge network needs some technology to move control about, and specifically to move control into some visible, available space. The movable control would be some kind of software agent.
It's obvious but I did not see this one until today: The ThinkGeek 9/11 T-shirt rm -rf /bin/laden.
What other language calls design documents apocalypses?
And in what other language would these design documents contain marvelous paragraphs like the following:
Let's face it, in the culture of computing, regex languages are mostly considered second-class citizens, or worse. "Real" languages like C and C++ will exploit regexes, but only through a strict policy of apartheid. Regular expressions are our servants or slaves; we tell them what to do, they go and do it, and then they come back to say whether they succeeded or not.
At the other extreme, we have languages like Prolog or Snobol where the pattern matching is built into the very control structure of the language. These languages don't succeed in the long run because thinking about that kind of control structure is rather difficult in actual fact, and one gets tired of doing it constantly. The path to freedom is not to make everyone a slave.
However, I would like to think that there is some happy medium between those two extremes. Coming from a C background, Perl has historically treated regexes as servants. True, Perl has treated them as trusted servants, letting them move about in Perl society better than any other C-like language to date. Nevertheless, if we emancipate regexes to serve as co-equal control structures, and if we can rid ourselves of the regexist attitudes that many of us secretly harbor, we'll have a much more productive society than we currently do. We need to empower regexes with a sense of control (structure). It needs to be just as easy for a regex to call Perl code as it is for Perl code to call a regex.
You've got to love it. Even if you don't want to use it, you've got to love it!
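You can see a small step in that direction even in JavaScript (a sketch, not Perl 6): with replace() and a function, control flows from the regex back into ordinary code for every match, instead of the regex merely reporting success or failure.

    // The servant style: call the regex, it answers yes or no
    var isDate = /^\d{4}-\d{2}-\d{2}$/.test("2002-06-14");   // true

    // The co-worker style: the regex engine calls our code for each match
    var shouted = "perl and lisp".replace(/\b\w+\b/g, function (word) {
      return word.length > 3 ? word.toUpperCase() : word;
    });
    console.log(shouted);   // "PERL and LISP"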
Finally my DI.pm - Digital Identity Perl module - has come to life. I've had a hard time finding the time to write up this rather minimalist, but eminently practical module. I'm using DI login on my test server at last. Publication of version 0.01 of the toolkit some time this weekend on F9S.
When you suck, you can only hope the others suck more. France and Uruguay tied, eliminating the distress caused by Denmark's abysmal performance against Senegal.
If you're transmitting information (even stochastic information) over a channel with noise (i.e. a random distortion of the data), there are good theorems and algorithms for recovering the original variable from the distorted signal if you have a good model of the distortion. This is used ingeniously by IBM to protect privacy while still collecting customer information over the internet. Customer data is collected - but passed through a distorting filter. The filter safely eliminates any meaningful individual value of the original customer data, but the distribution of the original data can be recovered with good accuracy. Think for a second about applying this to online voting. It would be theoretically sound and would protect online voters from any possibility of political pressure or abuse, but it would be very hard to explain to the public.
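A sketch of the idea in its simplest form (the classic randomized-response scheme for a yes/no question - not necessarily the exact filter IBM uses): every individual answer is distorted locally, yet the overall distribution can be recovered because the distortion is known.

    // With probability p the respondent answers truthfully,
    // otherwise the answer is replaced by a coin flip
    function distort(answer, p) {
      if (Math.random() < p) { return answer; }
      return Math.random() < 0.5;
    }

    // Invert the known distortion to estimate the true share of "yes"
    function estimateTrueShare(distorted, p) {
      var yes = 0;
      for (var i = 0; i < distorted.length; i++) { if (distorted[i]) { yes++; } }
      var observed = yes / distorted.length;
      return (observed - (1 - p) * 0.5) / p;
    }

    // Example: 100000 "voters", a true yes-share of 30%, p = 0.5
    var noisy = [];
    for (var i = 0; i < 100000; i++) {
      noisy.push(distort(Math.random() < 0.3, 0.5));
    }
    console.log(estimateTrueShare(noisy, 0.5));   // close to 0.30,
    // even though no single stored answer reveals what anyone actually said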
How appropriate that the CEO of a company with a new strategy for Open Source But Closed Binaries - threatening (in part) the open source movement by alienating non-technical users - should be called Ransom Love...
Non-technical quote: "Well, according to Ransom Love, CEO of Caldera Systems".
Mainly driven by Germany's massive 8-0 defeat of Saudi Arabia, the goals-per-match average of this year's World Cup is still very high. The score for the first four games was a staggering 3.5 goals per match. Sunday was a bit more relaxed, but only a bit - ending at 11/4, so we stay above a 3-goal average, namely 25/8.
So Denmark is below average. But what goals they were! Not having seen Argentina yet, I still think ours is the best game so far.
UPDATE
Having now seen the awesome Argentina-Nigeria match, it is safe to say that Denmark's match is not the best one played at the World Cup. Both teams were very good. Sweden and England are in for some seriously tough opposition. The Nigerian team stayed well organized, had tremendous opportunities of their own, and only lost out because Argentina was - as expected - scary.
In a recent documentary on Danish television, a spokesperson for the radical fundamentalist Hizb-ut-Tahrir movement angrily comments: Why is it that the West is always only told that in our society adulterers are stoned to death when four people have witnessed the adultery, and never that no one in our society pays taxes? (quote not necessarily accurate - but the words were to this effect).
BECAUSE IT'S WHAT REALLY MATTERS you idiot.