Sun, 27 Apr 2014
Personal backups
My main computers nowadays are:
- My personal laptop.
- My work laptop.
- My phone.
Given that, and given my propensity to start large fun home networking related projects but then leave them unfinished, here is my strategy for having reliable backups of my personal laptop:
- Buy an external hard disk, preferably the kind that requires no external power supply.
- Store it at work, and make an encrypted filesystem on it.
- Once every two weeks, take my personal laptop to work, connect the external disk over USB, and back up to it using e.g. dirvish (which is something like Apple's Time Machine software).
- When the backup finishes, write today's date on a post-it note, stick it on the external disk, then put the disk back in my work filing box.
This seems to have the following advantages:
- No decrease in privacy -- the data is stored encrypted.
- Convenient off-site storage.
- I don't have to be thoughtful about what I am backing up.
- Since I'll be using dirvish, restoring data will be easy.
If people have any thoughts about this, or do things a different way and have pros or cons to share, I'd love to hear.
I realize this doesn't protect my phone or work laptop. I'll work on those some other time.
permanent link and comments
Sun, 28 Jul 2013
Recommendations on setting up wifi repeaters
My old housemate Will emailed me saying he wanted to get a second wifi router to use as a repeater. I realized that I haven't written my standard recommendation down ever, just repeatedly used it to great success. So, here it is:
My general recommendation here is to build your own "wifi repeater" out of two "wifi routers", rather than buying something that calls itself a repeater. It relies on Ethernet bridging rather than any advanced wifi technology, which in my opinion makes it easy to diagnose.
+---------------------------------------------------------------------+
|                           Main wifi router                          |
|   uplink      port 1      port 2      port 3      port 4            |
+-----+------------+--------------------------------------------------+
      |            |
      V            V
+-----------+   +----------------+               +----------------+
| cable     |   | ethernet over  |   a/c power   | ethernet over  |
| modem or  |   | powerline      |-------------->| powerline      |
| whatever  |   +----------------+               +--------+-------+
+-----------+                                             |
                                                          |
   DO NOT USE!!!                               +----------+
        |                                      |
        V                                      V
   +--------+   +--------+   +--------+   +--------+   +--------+
   | uplink |   | port 1 |   | port 2 |   | port 3 |   | port 4 |
   +--------+---+--------+---+--------+---+--------+---+--------+
   | second wifi router (DISABLE DHCP on internal network)      |
   +------------------------------------------------------------+
If you don't want to do ethernet over powerline between them, you can do regular old Ethernet.
Crucially, you must disable DHCP on the second wifi router. Because the two routers' LAN ports are bridged, anyone on the second network will then have their DHCP broadcasts answered by the first wifi router.
Also, you should manually set the second router's admin IP to something like 192.168.1.2 if the main network uses 192.168.1.1 as the admin IP address, so the two routers don't conflict and you can still reach the second one's admin interface.
Do not connect the second network's uplink port to anything.
It doesn't really matter if you set the ESSID (network name) of both networks to be the same or different. I would gently recommend setting them to be the same, and have the same key, so people's laptops can happily roam between them.
Note also that if you need wired network connectivity for computers near the second router, anything you plug into its "port 1-4" ports will work fine. And the choice of port 1 on the main wifi router and port 3 on the second wifi router as the connection points is totally arbitrary.
Happy wifi-ing!
Mon, 06 May 2013
asheesh.org/scratch/ back in business
For a few years, I had been storing public notes to myself (that might possibly be useful to others) at http://asheesh.org/scratch/.
Then OpenHatch happened in May 2009, and I paid decreasing attention to that site.
Eventually, as a semi-unprotected MediaWiki instance, it became spammed to smithereens.
Last night and this morning, I did the following things:
- Made it so only sysops can edit the site.
- Clicked every link off the front page, and manually reverted it to the most recent non-vandalized page I could find.
Now you can more easily read my scratchy notes, like:
Honestly, it is a huge relief to see those old bits of text back on the web. It makes me feel so much more pleasantly connected to the timeline.
Fri, 25 Jun 2010
rose in Japan is down; time to make backups
My server in Japan is down, for reasons unknown. (This website is hosted from Minnesota.) As a result, freeculture.org is down too.
Today is a good day to remember that I should make frequent backups. I'm doing a backup run of the Minnesota machine right now.
Sun, 26 Apr 2009
Comments
What if there were comments on asheesh.org?
Discuss.
Sun, 05 Oct 2008
qemu IP address patch
I sometimes use the qemu virtualization system, or its cousin kvm, for creating virtual computers to test software in. Conveniently, qemu makes networking those really easy.
Unfortunately, the IP addresses it assigns to virtual machines happen to be in the same subnet as my desktop at work (at CC, 10.0.2.x). I had some fear of changing a piece of software as presumably complex as qemu.
I forged ahead and came up with a patch that I posted to the qemu-devel mailing list. I'm just writing this post in case someone wonders, "How can I change the IP address of the user net layer used by qemu to avoid a conflict?"
The answer is as easy as replacing the string "10.0.2" with "10.0.3" globally across the qemu codebase and recompiling. If that mailing list post ever goes away, I have a local copy of the patch.
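As a sketch of that replace-and-recompile step (the file and its contents here are made-up stand-ins, not real qemu source; in qemu the string lives in the slirp user-net code):

```shell
# Create a stand-in "source tree" containing the user-net subnet string.
mkdir -p qemu-src
printf '#define CTL_ALIAS "10.0.2.2"\n#define CTL_DNS   "10.0.2.3"\n' > qemu-src/slirp.h

# Replace every occurrence of 10.0.2 with 10.0.3, then rebuild as usual.
grep -rl '10\.0\.2' qemu-src | xargs sed -i 's/10\.0\.2/10.0.3/g'

grep '10\.0\.3' qemu-src/slirp.h   # both defines now read 10.0.3.x
```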
(This work was sponsored by CC, but pending an okay from CC, you should be free to use it under the terms of the WTFPL.)
Sat, 04 Oct 2008
What are your most expensive websites to run? Patching Apache to find out
When running a busy webserver, you may want to know how much server time is spent preparing each request, ideally broken down per website you host. Server processing time reflects things like how long MySQL queries took, or how loaded the disks are; in general, it measures how difficult it was to answer a request. It can also be interesting to compare the server time spent on a request today against the same request in the past, as an indication of how system changes (upgraded disks, a more complex filesystem) have affected your ability to process web requests.
Apache's mod_log_config lets you log how long a request takes from start to end, which includes the amount of time taken to send the actual data. That can be imagined as server_processing_time + time_to_send_data_to_client. I wasn't interested in seeing how slow or fast clients' net connections were.
In a project I named vhost_effort, I wrote a patch to Apache to be able to log just that server time spent from the start of the request to when the request is ready to be sent. That work was done at Creative Commons, and the software results are available under the Apache 2.0 license. vhost_effort.py is a hack that generates a pie graph for how much server time is spent on each vhost (among other sorts of visualizable statistics). I began thinking of using a visualizer for disk usage to make the pie graph interactive, but by the time I was nearly done working that out we had already gathered all the data we needed.
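For reference, if your Apache build supports it, stock mod_log_config can log the total figure with the %D format code (request duration in microseconds). A minimal per-vhost sketch, with the caveat that %D includes the client-send time that vhost_effort's patch excludes:

```apache
# %v = canonical ServerName, %U = URL path, %D = total request time (usec).
# Note: %D includes time spent sending the response to the client.
LogFormat "%v %U %D" vhost_timing
CustomLog /var/log/apache2/vhost-timing.log vhost_timing
```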
My projects page has a link to the code in the Creative Commons Subversion repository. I did write about this at labs.creativecommons.org a year ago also.
Fri, 26 Sep 2008
Announce and discuss lists
I have a habit of entering a community and leaving both an announce and a discuss list wherever I go. The wisdom of this is still unresolved. I thought I'd share one thing I do beyond that: set the reply-to header on the announce list to go to the discuss list.
That way, when there's an announcement and the peanut gallery wants to add something, they'll reply and the people interested in hearing more will hear it.
I remembered this upon reading that the BALUG lists have the same sort of split, and in particular that they were considering (on an opt-out basis) auto-adding people from discuss to announce.
Thu, 21 Aug 2008
dd, dd_rescue, and ddrescue
The short answer: "Use GNU ddrescue. GNU stands for Quality."
dd is a classic UNIX utility to read from and write to files (often devices). Typically, one uses it to copy a hard disk to a file, or to image a hard drive by copying a backup onto it.
One hits a problem when the hard disk has errors. In this case, dd abruptly stops working in the middle, reporting an "Input/output error." But when the hard disk has errors, usually what you want is to get an image of all the blocks on the hard disk that are readable - not just the first few before the first error!
(Note for the pedantic: Yes, I know about dd conv=noerror,sync. Those options are so easy to misuse (mostly by forgetting one of the two) that they're worth avoiding.)
Two tools are available for this particular purpose. Confusingly, one is called ddrescue, and the other is called dd_rescue.
Around 2001, Kurt Garloff wrote dd_rescue. Given the right options, it does what dd does, but it comes with instructions on how to use it to recover data from failing drives, like by running it multiple times or backwards. A wrapper script called dd_rhelp automates that process.
When you're running dd_rescue on an obscure OS like Mac OS X 10.3 because you dropped your laptop in Uganda and the Linux partition grew bad blocks and you still want your data, you will find that dd_rhelp is written as a complicated shell script that relies on GNU versions of core system utilities. OS X provides non-GNU versions, and you will waste hours fiddling with compiling those utilities just so you can run some dumb shell script.
In the summer of 2004, the same summer as I dropped my laptop, Antonio Diaz Diaz wrote "ddrescue," a stand-alone C++ tool that does the same things as dd_rhelp, but more sanely and therefore more efficiently. It became an official GNU project. GNU ddrescue, like dd_rhelp, can keep a log file to let itself gracefully pick up after interruptions.
When your hard disk fails, you should turn to your backups. But if you need a tool like these, just remember: "GNU ddrescue."
$ sudo apt-get install gddrescue
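Once it's installed (Debian's gddrescue package provides the binary as plain ddrescue), a typical two-pass rescue might look like the following. The device and file names here are examples only; double-check the device name before running anything like this:

```shell
# /dev/sdb is the failing disk; rescued.img and rescued.log live on a
# healthy disk. The log file lets ddrescue resume where it left off.

# Pass 1: grab everything that reads easily, skipping troublesome areas (-n).
ddrescue -n /dev/sdb rescued.img rescued.log

# Pass 2: go back for the bad areas, retrying each (-r 3), with direct
# disc access (-d) to bypass the kernel cache.
ddrescue -d -r 3 /dev/sdb rescued.img rescued.log
```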
Wed, 13 Aug 2008
Sending mail from a laptop
I often find myself on what I would call "hostile" networks: they allow only very limited Internet access, for example blocking port 25 so I can't connect to my mail server. Maybe you are never on filtered networks, but your home ISP won't relay your mail when you're away from home, and you want to send email directly from your laptop anyway.
Just do what I do! Let me explain.
Summary
- inetd listens on port 125
- Connections to it go through an SSH tunnel that executes "nc localhost 25" on some mail server
- (Optional) A real MTA runs on the laptop, so that I can send mail when offline; when mail delivery fails temporarily, Postfix queues the message until I get back online.
Justification
- Easy: apps can be configured to use localhost port 25 (or port 125) with no password.
- Correct: Postfix (when using 25) handles sending mail when offline, and reattempts delivery for me.
- Secure: Encryption all the way through the network, with the icing on the cake that this all looks like SSH, so nosy networkers near your laptop can't even see that's what you're doing.
Implementation in Three Steps
Step 1: ssh tunnel
This is the hardest part. To make things simple, I create a dedicated user on each end.
On the remote server (server)
[me@laptop] $ ssh me@server
[me@server] $ sudo adduser tunnelendpoint
[me@server] $ sudo su - tunnelendpoint
[tunnelendpoint@server] $ mkdir .ssh
On the local machine (laptop)
[me@laptop] $ sudo adduser tunnelclient
[me@laptop] $ sudo su - tunnelclient
[tunnelclient@laptop] $ ssh-keygen -t rsa   # make it passwordless
[tunnelclient@laptop] $ cat .ssh/id_rsa.pub | ssh tunnelendpoint@server 'mkdir -p .ssh ; chmod 0700 .ssh ; cat >> .ssh/authorized_keys'
On the remote server
[me@server] $ sudo su - tunnelendpoint
[tunnelendpoint@server] $ nano -w .ssh/authorized_keys
You'll see a key that starts with "ssh-rsa". Before it, add the following string, leaving a space before "ssh-rsa":
command="nc localhost 25",no-X11-forwarding,no-agent-forwarding,no-port-forwarding
(Note: "nc" is in the netcat package.)
On the local machine (laptop)
[tunnelclient@laptop] $ ssh tunnelendpoint@server
220 rose.makesad.us ESMTP Postfix (Debian/GNU): "every tragedy is a beauty that has passed"
Hooray! If you see a reply like mine that starts with "220", then all is well.
You're done with the hard part. Now the easy parts.
Step 2: inetd
[me@laptop] $ sudo aptitude install openbsd-inetd
Now edit /etc/inetd.conf to have this line:
127.0.0.1:125 stream tcp nowait tunnelclient /usr/bin/ssh ssh -q -T tunnelendpoint@server
Now restart inetd (sudo /etc/init.d/openbsd-inetd restart) and test it:
[me@laptop] $ telnet localhost 125
220 rose.makesad.us ESMTP Postfix (Debian/GNU): "every tragedy is a beauty that has passed"
Step 3: Postfix (optional)
This is my favorite part, but it's only necessary if you plan to send email when you're not connected to the Internet.
Just install Postfix, and add this to /etc/postfix/main.cf:
relayhost = 127.0.0.1:125
Restart Postfix and you should be set. Try sending some mail!
Closing
I was inspired by a Debian Administration post, except I had my own ideas about the best way to do it. I still like my way best.
One problem with the above approach is that it requires root on "server". It would be possible to do the ssh tunnel without a separate "tunnelendpoint" account, by instead adding that key (with the same forced command) to your regular account's authorized_keys.