Tag: ubuntu

apt-add-repository failing in Ubuntu 14.04 because of threading.py

Ran into an interesting one today:

# sudo add-apt-repository ppa:ondrej/apache2

[omitted]

Press [ENTER] to continue or ctrl-c to cancel adding it

gpg: keyring `/tmp/tmpnv_gva8h/secring.gpg' created
gpg: keyring `/tmp/tmpnv_gva8h/pubring.gpg' created
gpg: requesting key E5267A6C from hkp server keyserver.ubuntu.com
gpg: /tmp/tmpnv_gva8h/trustdb.gpg: trustdb created
gpg: key E5267A6C: public key "Launchpad PPA for Ond\xc5\x99ej Sur\xc3\xbd" imported
gpg: Total number processed: 1
gpg:               imported: 1  (RSA: 1)
Exception in thread Thread-1:
Traceback (most recent call last):
  File "/usr/lib/python3.4/threading.py", line 920, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.4/threading.py", line 868, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib/python3/dist-packages/softwareproperties/SoftwareProperties.py", line 687, in addkey_func
    func(**kwargs)
  File "/usr/lib/python3/dist-packages/softwareproperties/ppa.py", line 370, in add_key
    return apsk.add_ppa_signing_key()
  File "/usr/lib/python3/dist-packages/softwareproperties/ppa.py", line 261, in add_ppa_signing_key
    tmp_export_keyring, signing_key_fingerprint, tmp_keyring_dir):
  File "/usr/lib/python3/dist-packages/softwareproperties/ppa.py", line 210, in _verify_fingerprint
    got_fingerprints = self._get_fingerprints(keyring, keyring_dir)
  File "/usr/lib/python3/dist-packages/softwareproperties/ppa.py", line 202, in _get_fingerprints
    output = subprocess.check_output(cmd, universal_newlines=True)
  File "/usr/lib/python3.4/subprocess.py", line 609, in check_output
    output, unused_err = process.communicate(inputdata, timeout=timeout)
  File "/usr/lib/python3.4/subprocess.py", line 947, in communicate
    stdout = _eintr_retry_call(self.stdout.read)
  File "/usr/lib/python3.4/subprocess.py", line 491, in _eintr_retry_call
    return func(*args)
  File "/usr/lib/python3.4/encodings/ascii.py", line 26, in decode
    return codecs.ascii_decode(input, self.errors)[0]
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc5 in position 92: ordinal not in range(128)

Long story short, it turned out my locale settings weren't configured properly.

I applied the first suggestion, regenerating the locales, and then the second, setting the default locale in /etc/default/locale, then closed the SSH session and opened it again. That fixed things.
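For reference, you can confirm the problem by running locale (look for unset or POSIX/C values), and on Ubuntu the fix boils down to something like this (a sketch assuming a US English UTF-8 locale; substitute whichever locale you actually use):

sudo locale-gen en_US.UTF-8
sudo update-locale LANG=en_US.UTF-8

update-locale writes the setting into /etc/default/locale for you; you could also edit that file by hand. Either way, start a new SSH session so the new environment takes effect.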

Linux tip – regular expression find and replace in all files in a directory

As you may have seen me tweet, I’ve been looking for a way to do this. I didn’t want to manually change my Apache configuration to reflect my new internal IP address. After some Internet searching, I stumbled across this gem: http://www.linuxquestions.org/questions/linux-software-2/find-and-replace-text-in-multiple-file-203801/#post1742045

find . -name '[^.]*' | xargs perl -pi -e 's/192\.168\.1\.3/192\.168\.0\.3/g'

I adapted it into the command above for my task of replacing IP addresses. The first set of numbers is the old address and the second is the new one; don't delete the backslashes, since they escape the dots, which would otherwise match any character in the regular expression.

This command assumes that every file in the directory is a configuration file you want to modify and that none of their names start with a dot.
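If the tree contains subdirectories or filenames with spaces, a slightly more defensive variant (a sketch built from standard find/xargs options) is:

find . -type f -name '[^.]*' -print0 | xargs -0 perl -pi -e 's/192\.168\.1\.3/192\.168\.0\.3/g'

-type f limits the replacement to regular files, and -print0 with xargs -0 keeps filenames containing spaces from being split apart.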

Enjoy!

Update: According to a commenter, sed -i 's/thisip/thatip/g' * should also work. I didn't try that since I thought it wouldn't work with multiple input files (in fact, GNU sed's -i does accept multiple files; it just won't recurse into subdirectories on its own).
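If you want the sed version to recurse as well, you could hand it off to find (a sketch along the same lines as above):

find . -type f -name '[^.]*' -exec sed -i 's/192\.168\.1\.3/192\.168\.0\.3/g' {} +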

Quest for a Flexible Development Environment

Update 2! I blogged a follow-up to this on my developer blog: http://kkdev.tumblr.com/post/26751052566/pros-and-cons-of-home-office-development-servers

Update! Track my planning and thought process in the public Quest for a Flexible Development Environment Evernote notebook!

(If you don’t have a lot of time, skip to Phase 5.)

This blog post is actually a big question. It’s hard to express this question in 140 characters, so I blogged instead of tweeting. Here goes:

How can I set up a development environment that can transition between being Internet-accessible and usable offline in an hour or less?

Let me expand on this by telling you a bit about my past development environments.

Phase 1 – Local, Windows

Some time ago, I was developing with WampServer (similar to XAMPP) on Windows. This was OK in the beginning, especially before I specialized in Drupal. Over time, however (perhaps partly because I had started using Windows Vista), I noticed that performance was much slower than on a comparably configured Linux environment. This led to…

Phase 2 – Local, Linux via VirtualBox (Host: Windows)

Realizing that Linux (and for the curious, "Linux" nearly always means Ubuntu 8.04 or 10.04 in this post) was faster, I decided to set up a virtual machine (VM) in VirtualBox to use for development. With bridged networking, this posed no real issues. The only drawback was that it consumed some host system resources, which was expected. I was used to VMs because I worked with them regularly at my job, and this approach was a step up from running only on Windows; I actually used it for quite some time. It served me well while traveling in Norway in late 2010: I could work on buses, on ferries, etc. without worrying about having Internet access. I had VirtualBox's host-only networking set up so the host and guest could communicate, which let me test my sites on the VM from Windows, the workflow I was used to.

In fact, I would probably still be using this approach today if the laptop I ran it on hadn't started inexplicably using high amounts of CPU constantly, bringing the VM to a halt. That is to say, VM performance was subject to the limitations of the host: if the host had hardware or OS issues, my ability to develop in the guest VM suffered.
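For anyone recreating that setup, the host-only interface can be scripted with VBoxManage (a sketch; the VM name dev-vm is hypothetical):

VBoxManage hostonlyif create
VBoxManage modifyvm "dev-vm" --nic2 hostonly --hostonlyadapter2 vboxnet0

This attaches a second network adapter to the host-only interface, so the host and guest can talk to each other even with no outside network available (handy on buses and ferries). On a Windows host, the interface gets a name like "VirtualBox Host-Only Ethernet Adapter" rather than vboxnet0.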

During one bout of these issues, I tried another setup as a workaround…

Phase 3 – Pseudo-remote (physical machine in same location), Linux via NX Client for Windows

The irony of my workaround is that it became my standard. I think I started using this approach around mid- or late 2010 (probably around August). It allowed me to work as long as I had an Internet connection, and, since it ran on a physical machine, performance was arguably a bit better than in a VM. I say "arguably" since it's an old machine with 1 GB of RAM and a 2.4 GHz Pentium 4 processor. But hey… that's Linux's strong suit, right? I'd put in eAccelerator, and things would be fine for a development environment. The machine was on my network, so no issues there, and I could put it on a Hamachi VPN network to make it feel local even when I was remote.

In fact, at the time of writing, the machine actually runs this site! I use DNS Made Easy's Dynamic DNS to keep the IP address up to date. I say "at the time of writing" since I plan to move the site to a separate box eventually, especially since I expect its popularity to rise gradually as I network more, so that's obviously the better idea. Going back to my setup, another nice thing about it (and those of you who saw the Why Hooks? presentation Oliver Seldman and I gave at LA Drupal experienced this) was that I could fall back to PuTTY and Vim if NX Client for Windows couldn't connect. Perhaps somewhat ironically, that very fact of being a Vim user sort of saved me when I had to temporarily implement…

Phase 4 – Fully remote (VPS server), Linux via PuTTY

What drove me to this? After many years of service, my development machine's hard drive started to go. I realized this when trying to check out a critical repository via Bazaar (of which I fortunately had a CrashPlan backup that I later recovered), and drive tests confirmed it. I was at a loss; I definitely needed to keep working. Thinking through my options and knowing I could develop in a shell, I rented a low-cost VPS, moved over the projects I was working on at the moment, switched DNS to point there, and so on. It worked reasonably well, and I used that setup for a week or two. It was less than ideal because of the smaller amount of resources, but it definitely worked in a pinch. It also got me thinking about how I could ensure I can develop near-continuously (within working hours, obviously :)) without these sorts of infrastructure issues, which themselves take time to resolve! Let me tell you about my current phase, and then I'll summarize what I've said and eagerly await your responses, whether as blog comments or tweets directed at @wizonesolutions on Twitter.

Phase 5 – Phase 3 + Phase 4 + repository server

Phase 5 finds me back at Phase 3 for my main setup. I got a new hard drive for the server and installed Lubuntu on it fresh (a good move: it's lighter and faster when remoting in). I had my /etc configuration backed up, so I've been restoring my Apache sites incrementally. Much of my other configuration was already saved on my repository server. What's a repository server? It's a place for Bazaar and Git repositories (as well as Subversion, Mercurial, and CVS ones; it's just that I don't have any of those). This gives me some redundancy in repository location: if that server goes down, the important repositories will already be checked out elsewhere, and the not-so-important ones will have been captured in an older snapshot of the repository server and can be restored, unless that snapshot is lost. I plan to guard against that by backing up the server to CrashPlan, Amazon S3, or the VPS host's backup service (undecided as yet).

I realized I needed more redundancy after the Phase 3 -> Phase 4 incident. I've at least had decent backups in place for quite some time; that definitely helped, and I recommend everyone reading this make sure they do too. You'll hug yourself when you need to restore from them. I once lost original wedding photos to a replaced hard drive when I sent in a laptop for repair, and I've also lost some photos backed up on an iPod. You can bet I wasn't going to have a repeat of that. I even back up my external hard drives now (they contain some older files not on my main ones). The key is to have nothing in only one place, and when something must live in one place, to prefer the cloud over a physical location.
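If I end up choosing Amazon S3, the backup could be as simple as a cron job around s3cmd (a sketch; the bucket name and local path are hypothetical):

s3cmd sync --delete-removed /srv/repos/ s3://my-repo-backups/repos/

sync only transfers what has changed, so after the initial upload it stays quick even over a residential connection.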

The Future – Phase 6 – Hybrid setup? Synchronization? Puppet? Chef? Database replication?

This phase has a lot of questions, but the goal is to be able to switch between online (requires Internet or local network connection) and offline (virtual machine, no Internet required). Additionally, the offline setup needs to run on a laptop, since I'd be using it primarily when traveling on modes of transport with slow or non-existent Internet. My preferred work environment would still be the one requiring Internet, since that would make it accessible from anywhere I could use SSH, provide a separation of concerns (not having one machine do everything), etc. While offline, I would accept not having repository access; that's all right, since with Git and Bazaar I can commit offline and push my changes later. This works even with bound branches in Bazaar, with the caveat that I have to unbind them first and rebind them later, potentially encountering conflicts (that dance is sketched below). I could live with that, or simply keep mirror branches that remain bound alongside working branches where I actually make the commits, pushing the commits to the mirror branches when online; I've used that setup before with great success. The main offline requirement would be the ability to develop and test sites on the LAMP stack. Access to manuals and documentation for these while offline would be a bonus but isn't required.
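The unbind/rebind dance looks roughly like this (a sketch; the central branch URL is hypothetical):

bzr unbind
# work offline; commits now land only in the local branch
bzr commit -m "Refactor config loading"
# back online: reattach to the central branch and reconcile
bzr bind bzr+ssh://repos.example.com/project/trunk
bzr update

If the central branch moved on while I was offline, bzr update is where any conflicts would surface and get resolved.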

Synopsis of approaches I’ve tried

Phase 1 – Local, Windows:
Supports offline work: Yes
Acceptable performance: So-so
Sustainable: No
Similar to deployment environment: Probably not
Data safety: Same as rest of system

Phase 2 – Local, Linux on VirtualBox on Windows:
Supports offline work: Yes; host-only networking required for host-guest communication; can be set up while offline
Acceptable performance: Yes, but dependent on host system
Sustainable: Yes, but VM data should be backed up
Similar to deployment environment: Yes
Data safety: If the VM's hard drive file is lost, everything is gone unless it was backed up externally. The data is only easily accessible while the VM is running, although I have read about ways to mount the drive while it's off. Also, the VM disk file can take a couple of days to back up online, depending on a residential connection's upload speed

Phase 3 – Pseudo-remote (physical machine in same location), Linux via NX Client for Windows:
Supports offline work: No
Acceptable performance: Yes
Sustainable: Yes
Similar to deployment environment: Yes
Data safety: Essentially yes; only normal concerns taken for any system apply

Phase 4 – Fully remote (VPS server), Linux via PuTTY:
Supports offline work: No
Acceptable performance: Yes
Sustainable: Yes
Similar to deployment environment: Yes
Data safety: Yes, but less so than a physical system due to lack of access to the hardware (e.g., the host may not be willing to recover data from a failed node drive), so remote backup is critically important. If it's primarily a dev machine, code repositories may mitigate this risk, provided they are suitably backed up themselves

Phase 5 – Phase 3 + Phase 4 + repository server:
Supports offline work: No
Acceptable performance: Yes
Sustainable: Yes
Similar to deployment environment: Yes
Data safety: Depends on which environment is active. See Phase 3/Phase 4
Another consideration: if either machine crashes and becomes inaccessible before the work environment for active projects has been migrated, only code already pushed to the repository server can be retrieved immediately.

My wish list for Phase 6:
Supports offline work: Yes
Acceptable performance: Yes
Sustainable: Yes
Similar to deployment environment: Yes
Data safety: Same as Phase 3, at least
Time to switch between online/offline: <= 1 hr
Bonus features: Automated synchronization, automated switching, offline access to manuals/documentation for PHP/Apache/Drupal

With your help, I hope to get to Phase 6. I'll try to keep this post updated as my strategy develops. Thank you in advance!

P.S. This post is kind of hard to grok. Please leave suggestions for how I can make it easier to read. I want people to actually read it and comment. If they can’t, they won’t bother. Thanks!