More and more intrusive direct marketing ideas involving email and tele-marketing are being developed, allowing those who use them to spend a maximum amount of time actually bothering people, instead of waiting around for people who could not be bothered. The techniques - and the means to fight them - are discussed in the November issue of Wired.
I guess it is not really surprising, but one of the techniques is automated dialing, so that tele-marketers do not spend time reaching people who aren't at home, but rather talk all day to potential customers. The upshot of this is that you actually have to wait for an annoying telemarketer to become available once you answer the phone.
I'm now in NYC, and the Guggenheim has two amazing exhibits open. A Bill Viola exhibit (one new installation) and Moving Pictures, a collection of the art of the photograph and the moving picture through the last 30 years.
Both are stunning. Bill Viola is technically perfect and engaging and many other fine things as well, and the Moving Pictures exhibit is almost too much. The entire Rotunda filled with photographic and video art. Everything is great and nothing is missing.
A quick note from Boston, which is peaceful and pleasant and less waspy than I had assumed - or rather, parts of it are actually quite waspy. The city sits beautifully by the water - both the sea and the Charles River - which means there is almost always something green nearby. The European feel is unmistakable, and the city's history is evident too - though it is mostly in the place names and the geography that the history shows. The buildings don't really give away much, since most of them are fairly recent.
Recommended. On to New York.
The issue is really too important for shameless plugs, but can I just say that Ascio has had a technology for selective disclosure for some time now. Hear our thunder, said the mouse to the elephant....
This might just turn out to be a two-week cure for blogorrhea. On the other hand, I can post from the internet, so who knows...
Claus
One of the most eagerly awaited Google betas - the Google Compute feature of the Google toolbar - has finally been released to the general public. This is SETI@home, only instead of searching for green men from outer space, you're participating in research to solve the protein folding problem reported on earlier.
This - in contrast to the green men - is actually worthwhile, even if the team at Stanford aren't necessarily the top scientists in the field (if they are, I'm sorry - but it always looked more like a 'whoa, we're young and know the internet, let's find some science to do' kind of project to me).
Further analysis of my web stats indicates that the notes on the Mythical Man Month are the most read article on the site. This is also consistent with the most common search referrals (except the helpmeleavemyhusband ones).
I'm certainly not done with the concepts I talk about in this log entry, and most recently I've thought a little about the irony of the fact that I always, without fail, disable auto-completion features in 'Office-style' applications, like mail programs, the browser (the form handling, not the history) and the word processor. This seems to contradict the happiness with which I embrace IDE enhancements like code completion. Why the difference? First of all, the standard auto-completion features are a nuisance. They are quite simply badly done, and very intrusive. Second, I don't really trust Microsoft for a second not to send my keystrokes to shady businessmen. (So password suggestion is a very specific Thing That I Do Not Use.) Third, the lack of a model theory underlying the suggestions makes them much less valuable. They are rarely the proper suggestion. In a word processing application you always need a specific grammatical form of a word, and the suggestion mechanism does not typically include grammar in the algorithm used to guess the words to suggest. Artificial languages like the ones used in computers have the quality that tokens are never modified to alter meaning, so suggestion mechanisms need not take this into account.
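Something like this toy sketch is what I have in mind - Python, with an invented two-word lexicon and a made-up notion of a 'required form', purely to illustrate what a form-aware completer could do that a plain prefix matcher cannot:

```python
# Hypothetical sketch: a word completer that filters candidates by the
# grammatical form the sentence calls for, instead of just matching prefixes.
# The tiny lexicon and the required_form idea are invented for illustration.

LEXICON = {
    "suggest": {"base": "suggest", "3sg": "suggests", "past": "suggested", "gerund": "suggesting"},
    "complete": {"base": "complete", "3sg": "completes", "past": "completed", "gerund": "completing"},
}

def complete_word(prefix: str, required_form: str) -> list[str]:
    """Return only candidates matching both the typed prefix and the needed form."""
    candidates = []
    for forms in LEXICON.values():
        word = forms.get(required_form)
        if word and word.startswith(prefix):
            candidates.append(word)
    return candidates

# A prefix-only completer would offer 'suggest', 'suggests', 'suggested', ...
# A form-aware one narrows it down:
print(complete_word("sugg", "gerund"))   # ['suggesting']
```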
Of course word form is not the only interplay between language and code completion. The Delphi IDE has a particularly obnoxious model clash in its code completion. When you're writing a sentence and are mid-sentence at an 'active' point, i.e. the next token you write will be the first token of an expression identifying a value missing from the entire statement, Delphi has the good idea of requiring completion to return a value of a compatible type, since the sentence will not compile if there isn't one. However, the Delphi IDE is not able to accept the fact that you could write a sentence that computes a reference of the correct type, e.g. by descent through properties of objects nested within one another. This is particularly annoying: code completion starts up, but does not allow you to navigate to the accessor you need, since it has the wrong type.
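A rough Python sketch of what the completion could do instead - the toy type model is invented, and this is only the idea, not anything the Delphi IDE actually implements: search the property chains of the object in scope for accessor paths that yield the type the statement needs.

```python
# Toy type model: type name -> {property name: property type}. Invented for illustration.
from collections import deque

TYPE_MODEL = {
    "TForm":  {"Caption": "string", "Font": "TFont"},
    "TFont":  {"Name": "string", "Size": "integer", "Color": "TColor"},
    "TColor": {},
}

def paths_of_type(root_type: str, wanted: str, max_depth: int = 3) -> list[str]:
    """Breadth-first search through nested properties for values of the wanted type."""
    results, queue = [], deque([(root_type, "")])
    while queue:
        current, path = queue.popleft()
        if path and len(path.split(".")) >= max_depth:
            continue
        for prop, prop_type in TYPE_MODEL.get(current, {}).items():
            new_path = f"{path}.{prop}" if path else prop
            if prop_type == wanted:
                results.append(new_path)
            queue.append((prop_type, new_path))
    return results

# Completion at a spot that needs an integer could then offer the whole path:
print(paths_of_type("TForm", "integer"))   # ['Font.Size']
```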
The Internet Explorer link history is another case in point. If you have used a site with many implicitly loaded resources, the suggestion mechanism cannot do what my weblog analyzer can: distinguish between navigable pages and implicitly loaded resources. So the history record is polluted with all kinds of GIFs etc. that hide the link suggestions that would make any sense.
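For illustration, this is roughly the filter the weblog analyzer applies and the history list could apply too - the extension list is my own guess, not IE's actual rules:

```python
# Keep navigable pages, drop implicitly loaded resources (images, stylesheets, scripts).
from urllib.parse import urlparse

IMPLICIT_EXTENSIONS = {".gif", ".jpg", ".png", ".css", ".js", ".ico"}

def is_navigable(url: str) -> bool:
    path = urlparse(url).path.lower()
    return not any(path.endswith(ext) for ext in IMPLICIT_EXTENSIONS)

history = [
    "http://classy.dk/log/archive.html",
    "http://classy.dk/images/spacer.gif",
    "http://classy.dk/style.css",
]
print([url for url in history if is_navigable(url)])
# ['http://classy.dk/log/archive.html']
```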
So to make a proper suggestion, you need an exact or a good statistical suggestion mechanism, and you need to allow for the grammar of the expression that is currently in focus.
I am a Gauntlet Adventurer. I strive to improve my living conditions by hoarding gold, food, and sometimes keys and potions. I love adventure, fighting, and particularly winning - especially when there's a prize at stake. I occasionally get lost inside buildings and can't find the exit. I need food badly. What Video Game Character Are You?
Wanting to get into business - need a Superhot business concept? Look no further. Rapid fire, superhot idea generation for the patently absurd generation.
I've just installed The Webalizer at home (no - not in public) to analyze the logfiles for classy.dk. Traffic is rather modest but building nicely. I think that once I get ucmag.dk up and running again with a nice site - and of course once I get my act together with f9s - I'll really be on to something. The Webalizer is easy to use and very fast to set up. Recommended.
Interesting site stats: 560 unique visitors (mostly garbage of course). Approx. 1500 visits from those - and traffic is building.
Most interesting search engine referral: "Therese Damsholt". Most popular search engine referral: "helpmeleavemyhusband".
Awful truth: A lot of the traffic is just me using my site as navigation start page.
From IBM. Runs on your Pocket PC - translates between major languages.
Of course one should never make bold claims about the wonders of mechanical translation, or indeed any claim within the broad area of artificial intelligence (if there is such a discipline), without providing proof that these claims could be reasonable.
As a simple test I used the latest web demo of Systran's translation software on the log entry about machine translation in practice. Here are the results.
I took the paragraph:
The problem of correctly converting speech to meaning is hard enough, but if you can limit the responder to use a limited functional grammar - that could even be made an unambiguous closed subset of the open ambiguous grammar of the local language, you could conceivable have a valuable interchange with the speakers of this other language.
In Systran-French this is:
Le problème de convertir correctement la parole en signification est assez dur, mais si vous pouvez limiter le répondeur pour utiliser une grammaire fonctionnelle limitée que pourrait même être fait un sous-ensemble fermé non ambigu de la grammaire ambiguë ouverte du langage local, vous pourriez imaginable avoir un échange valable avec les parleurs de ce autre langage.
Which again (the true test) back-translates to this:
The problem to correctly convert the word into significance is rather hard, but if you can limit the responder to use a limited functional grammar which a nonambiguous closed subset of the open ambiguous grammar of the local language could be even made, you could conceivable have a valid exchange with the loudspeakers of this other language.
Clearly, this is not English spoken by a speaker of English - but for situations where conversational context can supply a lot of the meaning it isn't really all that bad. And certainly with the ability to further interrogate, speakers at both ends of the system could identify the typical style of error and adjust both their own utterances and their interpretation of the counterpart, to make some sense of the text.
As a further test, I iterated once more English->French->English and arrived at:
Problem to convert correctly word into significance is rather hard, but if you can limit the responder to use a limited functional grammar that a subset closed nonambiguous of the open ambiguous grammar of the local language could be even made, you could conceivable have a valid exchange with the loudspeakers of this other language.
This is really quite OK for degradation after four translations. Knowing the context of the conversation, I think you would get at least some of the meaning of the sentence.
It is old hat to state that the killer app - quite literally - for augmented sensory reality through technology is the battlefield. The severity of the contest and the willingness to pay are unrivalled. So it will come as no surprise that the US Army is investigating the use of a Universal Translator, Star Trek style. The solution to the comprehension problem is years off, so basically the idea is to use a limited form of universal speech generation app.
Essentially these are just digital phrase books that actually speak out the required phrase in a number of languages, but using an approach not too unlike what I imagine a child does to learn language, you can do better.
You would work your way out from a sort of functional language model - using simple substitution rules for templated sentences to specify a large number of sentences that are precise enough. In the same fashion you could have some comprehension, by using these sentences in an interrogation-style conversation, taking not much more than affirmative or negative cues from the responses you're given in a native tongue.
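A minimal sketch of the substitution idea - the templates and slot fillers are invented for illustration, but they show how a handful of rules multiplies into a useful number of precise sentences:

```python
# Generate precise sentences from a small set of templates and slot fillers.
from itertools import product

TEMPLATES = [
    "Where is the {place}?",
    "I need a {thing}.",
    "Is the {place} open {time}?",
]

SLOTS = {
    "place": ["train station", "hospital", "pharmacy"],
    "thing": ["doctor", "room for two", "taxi"],
    "time": ["now", "tomorrow"],
}

def expand(template: str) -> list[str]:
    """Fill every slot in a template with every allowed value."""
    names = [n for n in SLOTS if "{" + n + "}" in template]
    sentences = []
    for values in product(*(SLOTS[n] for n in names)):
        sentences.append(template.format(**dict(zip(names, values))))
    return sentences

all_sentences = [s for t in TEMPLATES for s in expand(t)]
print(len(all_sentences))   # 3 + 3 + 3*2 = 12 sentences from just 3 templates
```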
The problem of correctly converting speech to meaning is hard enough, but if you can limit the responder to use a limited functional grammar - that could even be made an unambiguous closed subset of the open ambiguous grammar of the local language - you could conceivably have a valuable interchange with the speakers of this other language.
This is not too unlike the approach in artificial languages like Esperanto, except that the use of technology affords additional simplicity, in that one could specify a functional subset of any natural language of interest, instead of forcing everybody to adapt to a shared functional language subset.
This seems to me to be a case for humanistic intelligence. The downconversion to a subset of your own language is close to effortless. The translation of the downconverted language to a subset of another natural language could be feasible for a computer, and would make communication a lot simpler.
We all do this when we try to speak some language of last resort in a foreign country. Working out from stock phrases using a simple subset of techniques for constructing meaning in the foreign language, we try to make ourselves understood. This may be a personal experience of mine, but I always find the approach stumbles when I'm answered, in the signal processing phase - when I receive auditory input in response that is alien to me.
If I could have just a babelfish-quality translation of words said to me in French with me when traveling, I would be infinitely better off than I am today when forced to understand French.
A device to offer that kind of information should be possible today - even in a mobile device - certainly a personal computer of fairly recent model is sufficient, and the most powerful palmtops are only something like 6-7 years behind stationary devices in computing power.
So in short, what I want on my PocketPC is MS Talk - a personal dictation program in any of the supported languages, offering translation of the dictated text at the quality Systran has been able to offer since the mid '90s. Very few sentences would make any sense at all - but just a fast translation of stock phrases and common terms would help me book hotel rooms, find the train station etc.
Japanese telecommunications companies have developed what can best be described as a Computer-Human-Human-Computer interface, according to New Scientist.
Using the physical connection of a good old-fashioned handshake, the report states that data transfer rates of 10 Mbit are possible - without discomfort to the transmitter, one would hope.
This is one of those really old school science fiction concepts that have been around for so long that all reports on it finally being possible are immediately considered unbelievable, because the story is just too good. So one would hope that New Scientist has done some solid factchecking.
We all know about the 90/10 and 80/20 salary plans - where the last 10 or 20 percent respectively is dependent on the performance of the employee and the company in general. John Chambers - the CEO of Cisco - is stepping forward and taking a 0/100 salary plan. His fixed salary is now one dollar, so that his only pay is the payoff from his stock option plan. I'm sure he can afford it after the golden years, but still. Way to make a statement, Mr. Chambers.
In the ongoing series on quips about natural and artificial languages I sat and thought a while about the purpose of reflection in artificial languages.
At first glance reflection looks like a feature of language, but in reality it usually serves another purpose: It provides information about the model the language is speaking about.
In natural languages, the ambient knowledge about the world modeled by language is so strong that explicit reflection is rarely needed. In artificial languages this is not the case - and reflection provides the model knowledge that enables brevity of expression. It is not quite the same as the metalinguistic expressiveness of natural languages, since metalinguistic abstraction in natural languages relies exactly on heavy ambient model data and deep conversational state. Or maybe more precisely, reflection in natural language is not as explicit, since reference to entities can be made indirectly, relying on deep state.
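A tiny Python illustration of reflection supplying model knowledge - the Customer class is just an invented example model, and the point is that one brief generic routine stands in for explicit per-class code:

```python
# Reflection lets a single generic routine report any object's model.

class Customer:
    def __init__(self, name: str, city: str):
        self.name = name
        self.city = city

def describe(obj) -> str:
    """Use reflection to describe any object, with no per-class code."""
    fields = ", ".join(f"{k}={v!r}" for k, v in vars(obj).items())
    return f"{type(obj).__name__}({fields})"

print(describe(Customer("Alice", "Boston")))
# Customer(name='Alice', city='Boston')
```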
So we see that the level of reflection of a language is paramount for productive, brief expression in it.
This again tells us something about where wizards fail, and what the purpose of IDE's really is at the very abstract level.
IDE's provide rich conversational state - adding to the ambient knowledge about the world modeled by the software. The better the IDE the more straightforward the conversation leading to code. Intentional and aspect oriented software initiatives are looking to dramatically raise the ambient level of conversational state.
Wizards also provide rich conversational state. But the second the conversation is done, the wizard forgets it ever took place. So the conversational state is gone and you are left with a very explicit, very heavy uphill struggle to keep the momentum going. Often it is simpler to redo the conversation with variations. Repetitive actions are error prone, so the conclusion is that wizards are bad.
This then is the clear cut but sufficiently abstract reasoning behind previous notes on the requirements for good development methods.
A reader of classy.dk proposes a risk index for business proposals as Investment/(Experience * PitchLength). The Delmore Effect suggests that it could be that a lengthier proposal is in fact an indicator for less desire to win in the proposer. So risk may in fact be greater if the pitch is longer.
But of course length of pitch could be an indicator of a poorly articulated strong desire (i.e. anxiety preventing the proposer from getting to the point) as well as of a well-articulated weak desire (jargon-heavy, detailed analysis of what-ifs that would make sense one year into the business plan but not in getting the ball rolling).
It turns out I - like the rest of the world - am a victim of a phenomenon known as The Delmore Effect. Namely, my ability to articulate goals is better for less important activities. This is of course a hindrance to performance. I should probably do some reading on Paul Whitmore's dissertation on Defeating Delmore.
Archiving, categorizing, run-everywhere Perl blogware in 91 lines of code at Blosxom. I must admit I post a lot from work, so web-based post editing is good for me, which is why I use Moveable Type. But yes, HTML textarea fields suck as editors go.
More fortuitous linkage from J. Udell's weblog - this time indirectly : Dealing with Diversity covers some of the same bases I failed to address properly in a previous incomplete post. Sam Ruby seems to like typing - but in an open permissive perlable kind of way.
I think types are fine - when pragmatic development practices make typing something that simply happens without the conscious effort of the developer. I think most of the reservations against typing are related to this thinking too: text-centric computation can make do without typing to a large extent because the ultimate consumer - the reader - can do all the type inferencing machines can only dream of.
Approaches to distributing large-scale, open, universal type systems beyond the variable set of C seem to fail a lot - and I think this is mainly because it's just hard work. Strongly typed web services do not subtract from this work without a new level of service in the systems consuming the data from the services. And even then, the general guidelines of data before computation and open-by-default data universes are the rules to go by.
This whole thing is of course also related to some of the remarks in this post on explicit versus implicit design, and failure or nonfailure of object design.
While I think the patterns movement and the mode of thought it supports is a usable way out of specification hell, I think explicit generalization in strongly typed languages through complex object interactions is a cost to the developer that it is not entirely clear is justified by functionality or reusability or clarity of form and function.
Once again I think the perlish notion of scripted orchestration of compiled code wins hands down in this respect. In weakly typed languages the orchestration can be quite briefly written, and if there is a well-written interface to strongly typed libraries the confrontation between strong and weak types need not hurt so much.
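A minimal sketch of that orchestration style - in Python rather than Perl, and assuming a Unix-like system where the C math library can be located - with the brief, weakly typed script doing the wiring and the strongly typed, compiled library doing the work behind a declared typed boundary:

```python
# The scripting side stays brief; the typed boundary is declared once.
import ctypes
import ctypes.util

libm = ctypes.CDLL(ctypes.util.find_library("m"))   # compiled, strongly typed library
libm.cos.restype = ctypes.c_double                   # declare the typed interface...
libm.cos.argtypes = [ctypes.c_double]

angles = [0.0, 0.5, 1.0]                             # ...then orchestrate loosely
print([libm.cos(a) for a in angles])
```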
I found more notes related to the thoughts in a previous post on identity and the internet in a post on Jon's Radio - Jon Udell's always informative weblog. The point of his message is different but similar - The network will connect you regardless of your desire to connect. What remains in your power is what you look like when you connect.
The implicit .NET notion that there is no network latency, and that communication models that work well on the desktop automatically distribute, is wrong.
Amen!
Think of Web Services as queues that consume and generate XML Schema typed messages. That will force your design to be coarse-grained and loosely coupled. Hallelujah!
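A rough sketch of that queue-shaped view of a web service - the message names are invented - where the consumer pulls whole, self-describing XML documents off a queue and pushes whole documents back, rather than making fine-grained remote calls:

```python
# Coarse-grained, loosely coupled: one XML document in, one XML document out.
import queue
import xml.etree.ElementTree as ET

requests, responses = queue.Queue(), queue.Queue()

requests.put("<orderRequest><item>book</item><quantity>2</quantity></orderRequest>")

def service(inbox: queue.Queue, outbox: queue.Queue) -> None:
    """Consume one whole XML message and emit one whole XML message in reply."""
    doc = ET.fromstring(inbox.get())
    reply = ET.Element("orderConfirmation")
    ET.SubElement(reply, "item").text = doc.findtext("item")
    ET.SubElement(reply, "status").text = "accepted"
    outbox.put(ET.tostring(reply, encoding="unicode"))

service(requests, responses)
print(responses.get())
# <orderConfirmation><item>book</item><status>accepted</status></orderConfirmation>
```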
Found some notes on the E language (I don't know if the Ecstasy reference is deliberate).
It seems to be a good attempt at distributed personal computing - in fact what digital identities should enable. I like in particular the safe, private methods of dispatch. It remains to be seen, IMHO, whether distributed code is really the way to play. Personally I think located code and distributable data is the way to play, and I think all successful internet protocols are evidence in favor of my position. They almost never specify APIs for distribution but only data formats.
My ideal notion of distributed information would not be distributed algorithms, but distributed data. The data would however be live, in that it would have active and selective security policies applied, so it would be implemented as objects - but they would only have accessors/properties defined, not functions.
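A minimal sketch of that idea, with an invented policy model: the record travels as data, exposes only policy-guarded accessors, and has no other methods to call.

```python
# Live data: every read goes through the record's own selective disclosure policy.

class LiveRecord:
    def __init__(self, fields: dict, policy: dict):
        self._fields = fields
        self._policy = policy            # field name -> set of roles allowed to read

    def get(self, field: str, reader_role: str):
        """The only operation: a policy-guarded accessor."""
        if reader_role not in self._policy.get(field, set()):
            raise PermissionError(f"{reader_role} may not read {field}")
        return self._fields[field]

record = LiveRecord(
    {"email": "someone@example.com", "phone": "555-0100"},
    {"email": {"friend", "self"}, "phone": {"self"}},
)
print(record.get("email", "friend"))     # allowed
# record.get("phone", "friend")          # would raise PermissionError
```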