The classy.dk kitchen server sits behind an ADSL router provided by my ISP. The router is sensibly almost closed, with only FTP, HTTP, SMTP and DNS ports open by default, and none of these mapped to the NATted addresses the router assigns by default through DHCP. I'm fine with that even if it is stupid ISP control of my actions - fewer security threats to worry about, and I can actually turn on Windows on a new machine without being owned by a virus after 5 seconds.
The only server I have set up to listen for inbound traffic is the old warhorse classy.dk web server (and yes, it is in fact located in my kitchen, like it says on the blog).
Occasionally I'd like to access resources on the other machines on the network, though, and that just blows. The problem is that those machines sometimes run Windows and most certainly shouldn't be listening to network traffic. I could use SSH tunneling via the web server and then a terminal emulator to look at the hidden machines, but that's just annoying. I want full access, with file browsing. The works.
A real VPN is needed but which one, how to set it up and how to pass it through an interface on the webserver?
Here's a way: OpenVPN with SSH tunneling.
Since I'm not talking to more than one machine at a time I can just use the simple point-to-point setup with a static key. I want to modify the howto to work through an SSH tunnel. That means running the server end over TCP by adding the line

proto tcp-server

to its config, and pointing the client at the local end of the tunnel by adding the lines

remote localhost
proto tcp-client

to the client config. The tunnel itself is just

ssh -L1194:vpnserver:1194 user@webserver
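For reference, here's roughly what the two ends could look like - a minimal sketch of the static-key setup from the howto, where the tun addresses, the static.key path and the vpnserver/webserver names are placeholders rather than my actual setup:

# on vpnserver (the hidden machine) - server.ovpn
dev tun
proto tcp-server
port 1194
ifconfig 10.8.0.1 10.8.0.2
secret static.key

# on the machine I'm sitting at - client.ovpn
dev tun
proto tcp-client
remote localhost
port 1194
ifconfig 10.8.0.2 10.8.0.1
secret static.key

# generate the shared key once and copy it to both ends
openvpn --genkey --secret static.key

# open the tunnel through the webserver, then start OpenVPN at both ends
ssh -L1194:vpnserver:1194 user@webserver
openvpn --config server.ovpn    # on vpnserver
openvpn --config client.ovpn    # locally

Once the link is up the hidden machine answers on 10.8.0.1, so file browsing, remote desktop and the rest work against that address through the tunnel.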
Who knew? Windows actually supports symlinks. It's directory-only - but that's just too bloody useful to hide away in a resource kit. Thank god we (at least for now) still have Sysinternals. Can't wait to see these guys stop making a difference as they try to work from inside the belly of the beast.
Anyways, Sysinternals provides the useful utility Junction for defining... junctions - that's what directory symlinks are called on Windows.
This brings the well-known "versioned directories, live version symlinked" deployment technique to Windows.
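As a sketch of what that looks like with Junction (the paths and version names are made up for illustration), each release gets its own directory and the live junction is repointed at whichever one should be current:

rem unpack the new build next to the previous ones, e.g.
rem C:\sites\myapp-20060815, C:\sites\myapp-20060901, ...

rem drop the old junction and point "live" at the new version
junction -d C:\sites\live
junction C:\sites\live C:\sites\myapp-20060901

Rolling back is then just repointing the junction at the previous directory.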
Thanks for the heads up to that other Claus.
"I started keeping a list of these annoyances but it got too long and depressing so I just learned to live with them again. We really are using a 1970s era operating system well past its sell-by date. We get a lot done, and we have fun, but let's face it, the fundamental design of Unix is older than many of the readers of Slashdot, while lots of different, great ideas about computing and networks have been developed in the last 30 years. Using Unix is the computing equivalent of listening only to music by David Cassidy."
He's right and he's wrong. It seems entirely likely to me that further down the road we'll find that software works much like genetic development in nature. Nature never throws out old designs. In fact most of our basic human design is the same as the basic design of fish and plants and bacteria, and it hasn't changed in billions of years. However, the interest, the competitive edge, moves away from the old designs once they win and on to greater things. So I'm not sure we'll ever have new file systems - or new anything, really. I find it entirely likely that inside the massively parallel billion-CPU-core machine of 2050 we'll find a million Linux 2.6 cores with ext3 filesystems...
I think we can already see this as OSes get commoditized and the interest moves from scaling up to scaling out. Scaling out is a developer's way of saying "I'm not going to fix the I/O DNA or the process DNA of computing, I'll just add sophistication on top".
The only real reason this isn't truly plausible on a 200-year scale is energy consumption. It's quite possible that in a truly parallelized world we'd much rather have a far simpler operating system, able to function on much less power but robust and distributable.
[UPDATE: I should have read the whole thing, and known at least a minimum about Plan 9, which answers some of the questions - but the failure of Plan 9 to catch on underscores the point, and it's clear from the interview that Pike is aware of this]
The question that then comes to mind: suppose we wanted to build the multi-concurrent, internet-ready super machine of the future, programmed entirely in a fantastic functional language able to hide complexity and concurrency in an efficient way - what would we keep around?
Some ideas on design points:
(I think I need to start a blog specifically for spaced out posts)
Eye-opening (well, I'm unsure if it is - the stalling of acceleration it points out has been apparent for a while) piece on a fundamental sea change in computing technology, forced by the breakdown of the previously available "free lunch" of exponential hardware improvement.
Improvements in dealing with concurrency (from functional programming come tons of ways to do concurrency without thinking explicitly about threads) are definitely something to watch.
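For a taste of what that looks like, here's a small Haskell sketch using the parallel package's evaluation strategies (my own illustration, not something from the article):

import Control.Parallel.Strategies (parMap, rdeepseq)

-- A deliberately expensive pure function.
expensive :: Integer -> Integer
expensive n = sum [1 .. n]

main :: IO ()
main =
  -- parMap sparks one evaluation per list element and the runtime
  -- spreads the sparks over the available cores - no thread in sight.
  print (sum (parMap rdeepseq expensive [1000000, 2000000 .. 20000000]))

Compile with ghc -threaded and run with +RTS -N to actually use the extra cores.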
The benefits, by the way, are already appreciable, since concurrency is already a design problem to be reckoned with in distributed computing - and with everything moving to the web, who isn't doing distributed computing projects?
For the ultimate in concurrency we need to go to quantum computing of course.
Sweetness. Google actually does a bindump of exe files it finds in the world and indexes the resulting metadata. So you can basically search for executables online that use a particular DLL or expose a particular call.
Just had a look at the Dabble DB demo and it looks awesome. Simple and highly useful. Definitely signing up for this. (Yes, the whole deal with letting other people handle your important data is still scary - but soon somebody will start The Hosted Data Backup Company that does backups of your GMail, Writely, Dabble, Flickr, yada yada yada data repositories, and they will in turn standardize the output formats of these apps, and that will in turn make desktop applications that back your data up easy to write. Markets at work. Lovely.)