Category: Life of a Web Developer

How-to: Create Drupal development sites in Quickstart

Yesterday, I felt like reviewing some patches, so I fired up my Quickstart-based virtual machine and set about creating some Drupal development sites. I realized I first had to create Drush Make files to get the proper development versions installed, so I did that. However, I also noticed that, despite cloning the code via Git and checking out a particular branch, the checkout was not actually a Git repository. This is because Drush Make strips the repository metadata unless you pass the --working-copy switch. I’ve posted a workaround in the Quickstart issue queue. This post mostly serves as pointers to a couple of things:
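As a sketch of the --working-copy fix (the file name, project name, and branch here are hypothetical, just for illustration), a make file can tell Drush Make to pull a development branch via Git:

```
; example.make – pull the 7.x-1.x development branch of a module via Git
core = 7.x
api = 2
projects[fillpdf][type] = module
projects[fillpdf][download][type] = git
projects[fillpdf][download][branch] = 7.x-1.x
```

Building it with `drush make --working-copy example.make ./build` (rather than plain `drush make`) should leave the module checkout as a real Git repository you can branch in and create patches from.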

Meetup API Tools Seek Co-Maintainer, Further Developments Possible

I’ve always wanted to title a post like that. Anyway, I’m certainly not getting anywhere fast on PHP Meetup API Client or Meetup API on drupal.org.

There are issues open on both projects about this. Reply to them or contact me.

Meetup API on drupal.org issue: http://drupal.org/node/1194532
GitHub issue: https://github.com/wizonesolutions/meetup_api/issues/4

Drupal Camp Sacramento Area 2011 Conference Report

This weekend, I attended the first DrupalCamp in the Sacramento, California area. It happened to be held in Davis, a location which worked for me.

Some of you might know that I was talking about taking Amtrak’s Coast Starlight up to the Bay Area. I indeed did. Here are some pictures: (flickr link coming soon; I have to upload the pictures).

But as for the camp itself, it’s best to break it down into the sessions I attended and then give my overall impressions.

Day 1

As every attendee certainly did, I started off my day listening to Nate Haug (@quicksketch)’s keynote speech. He talked about the community, contributing, contributing productively, collaboration, and the care shown to community members. It was a good start. After that, I went to my first session.

10:00-11:30 – VoIP for Drupal: Turning Drupal into a phone system

The title of this presentation had intrigued me, and I’m glad I checked it out. Adam Kalsey (@akalsey) of Tropo did a fantastic job of demonstrating the VoIP module and how command sets could be sent to phone systems using PHP code.

1:00-2:30 – Using Drupal as an Application Development Platform

This was a neat presentation as well, also given by Adam Kalsey. His thesis was essentially that Drupal is an application development platform that ships with a great CMS as its default implementation. He defended it well, outlining many of the subsystems I do indeed deal with regularly.

2:30-4:00 – Building a Distribution using Features, Drush Make, Installation Profiles, and more

Ben Shell gave a fascinating presentation on the topic above. I found this very useful, as it cleared up some questions I had regarding the whole thing. I liked how he spoke a bit about how to get drupal.org to fully package your distribution or installation profile for download!

4:00-5:30 – Streamline your workflow with Fill PDF – fill your PDF templates with your site’s forms

Some dude who came from L.A. gave this one. I think his name was Kevin Kaland or something. Of course it was awesome; would I say otherwise? Fortunately, you don’t have to listen to me; Doug R. Wu has given a brief “str8up” account of the talk. That coupon code expires Monday, by the way.

Day 2

Morning-12 – Code sprint

Saturday ran late for some reason, and I got lost on the way back to campus, so I rolled in around 11 AM. I discovered that no organized code sprint was happening, so I worked more on adding Webform token support to Fill PDF on Drupal 7. That work is now complete.

1:00-2:30 – Why Drupal uses hooks, and why you should too

I bumped into that Kevin Kaland guy again at this talk. Something about hooks in Drupal. People liked it or something. (If you blogged about this talk, can you link to it in the comments?)

2:30-4:00 – Know Where The Fire Is (Monitoring Drupal Sites)

I wrapped up my camp with Mike Hathaway’s Nagios talk. It was cool; Nagios is definitely a tool I will have to try some time, along with the Drupal Nagios module, of course!

Conclusion

So ended my camp, and so began my transportational journey back…with a new sticker on my laptop!

Linux sed trick – Remove line from file by number

I wanted to document this before I forgot it. To remove a single line from an existing file with sed, use:

sed -i '[num]d' [filename]

For example, to remove line 1 from ~/.ssh/known_hosts (my exact use case right now), type:

sed -i '1d' ~/.ssh/known_hosts

sed is a neat little tool. I sometimes use it to quickly copy and change Apache virtual hosts as well. I recommend learning a bit about it, especially its ‘s’ command.

P.S. If you don’t use the -i switch, sed will output the result after replacements instead of changing the file. You can pipe (|) this output to other programs or redirect (> or >>) it to a file.
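Since I mentioned the ‘s’ command, here is a minimal sketch of that vhost-copying trick (the file names and hostnames are made up for the example):

```shell
# Create a minimal stand-in for an Apache vhost file:
printf 'ServerName dev.example.com\nDocumentRoot /var/www/dev\n' > dev.conf

# s/pattern/replacement/ substitutes on each line; without -i, the
# result goes to stdout, which we redirect into the new vhost file:
sed 's/dev\.example\.com/staging.example.com/' dev.conf > staging.conf

cat staging.conf
```

The dots are escaped in the pattern because `.` matches any character in sed regular expressions; the `DocumentRoot` line passes through unchanged since it never matches.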

Happy sed…ing.

Quest for a Flexible Development Environment

Update 2! I blogged a follow-up to this on my developer blog: http://kkdev.tumblr.com/post/26751052566/pros-and-cons-of-home-office-development-servers

Update! Track my planning and thought process in the public Quest for a Flexible Development Environment Evernote notebook!

(If you don’t have a lot of time, skip to Phase 5.)

This blog post is actually a big question. It’s hard to express this question in 140 characters, so I blogged instead of tweeting. Here goes:

How can I set up a development environment that can transition between being Internet-accessible and usable offline in an hour or less?

Let me expand on this by telling a bit about my past development environments.

Phase 1 – Local, Windows

Some time ago, I was developing with WampServer (similar to XAMPP) on Windows. This was OK in the beginning, especially before specializing in Drupal. Over time, however, and maybe somewhat due to starting to use Windows Vista, I noticed that performance was much slower than a Linux environment of similar configuration. This led to…

Phase 2 – Local, Linux via VirtualBox (Host: Windows)

Realizing that Linux (and for the curious, “Linux” nearly always means Ubuntu 8.04 or 10.04 in this post) was faster, I decided to set up a virtual machine (VM) in VirtualBox to use for development. With bridged networking, this posed no real issues. There were no real drawbacks other than it consuming some host system resources, but that was expected. I was used to VMs because I used them regularly at work. This approach was a step up from running locally on Windows alone, and I actually used it for quite some time. I used it while traveling in Norway in late 2010, and it was great! I could work on buses, on ferries, etc. without worrying about having Internet access. I had VirtualBox’s host networking set up so that I could have network communication between the host and guest and still test my sites on the VM from Windows, the workflow I was used to. In fact, I would probably still be using this approach today if the system (a laptop) I was on hadn’t started inexplicably using high amounts of CPU constantly, bringing the VM to a halt. That is to say, VM performance was subject to the limitations of the host: if the host had hardware or OS issues, they limited my ability to develop in the guest VM.

During one bout of these issues, I tried another setup as a workaround…

Phase 3 – Pseudo-remote (physical machine in same location), Linux via NX Client for Windows

The irony of my workaround is that it became my standard. I think I started using this approach around mid- or late 2010 (probably around August). It allowed me to work as long as I had an Internet connection, and, since it was running on a physical machine, performance was arguably a bit better than in a VM. I say “arguably” since it’s an old machine with 1 GB of RAM and a 2.4 GHz Pentium 4 processor. But hey…that’s Linux’s strong suit, right? I’d put in eAccelerator, and things would be fine for a development environment. The machine was on my network, so no issues there, and I could put it on a Hamachi VPN network to have the feel of it being local even when I was remote. In fact, at the time of writing, the machine actually runs this site! I use DNS Made Easy’s Dynamic DNS to keep the IP address up to date. I say “at the time of writing” since I plan to move the site to a separate box eventually, especially since I expect its popularity to rise gradually as I network more…so that’s obviously a better idea. Going back to my setup, another nice thing about it (as those of you who saw the Why Hooks? presentation Oliver Seldman and I gave at LA Drupal experienced) was that I could fall back to PuTTY and Vim if NX Client for Windows couldn’t connect. Perhaps somewhat ironically, that very fact of being a Vim user sort of saved me when I had to temporarily implement…

Phase 4 – Fully remote (VPS server), Linux via PuTTY

What drove me to this? After many years of service, my development machine’s hard drive started to go. I realized this when trying to check out a critical repository via Bazaar (of which I fortunately had a CrashPlan backup that I later restored). Drive tests confirmed it. I was at a loss; I definitely needed to keep working. Thinking about my options and knowing I could develop in a shell, I rented a low-cost VPS, moved over the projects I was working on at the moment, switched DNS to point there, etc. It worked reasonably well, and I used that setup for a week or two. It was less than ideal because of the more limited resources, but it definitely worked in a pinch. It also got me thinking about how I could ensure I could develop near-continuously (within working hours, obviously :)) without these sorts of infrastructure issues (which themselves take time to resolve)! Let me tell you about my current phase, and then I’ll summarize what I’ve said and eagerly await your responses, whether they come as blog comments or are directed at @wizonesolutions on Twitter.

Phase 5 – Phase 3 + Phase 4 + repository server

Phase 5 finds me back at Phase 3 for my main setup. I got a new hard drive for the server and installed Lubuntu on it fresh (a good move – it’s lighter and faster when remoting in). I had my /etc configuration backed up, so I’ve been restoring my Apache sites incrementally. Much of my other configuration was already saved on my repository server. What’s a repository server? It’s a place for Bazaar and Git repositories (as well as Subversion, Mercurial, and CVS ones – it’s just that I don’t have any of those). This gives me some redundancy in repository location: if that server goes down, the important repositories will already be checked out elsewhere, and the not-so-important ones will have been backed up in an older snapshot of the repository server and can be restored unless that snapshot is lost (which I plan to guard against by backing up the server to CrashPlan, Amazon S3, or the VPS host’s backup service – undecided as yet).

I realized I needed more redundancy after the Phase 3 -> Phase 4 issue. I’ve had decent backups in place for quite some time, at least…that definitely helped, and I recommend everyone reading this make sure they have them too. You’ll hug yourself when you need to restore from them. I once lost original wedding photos to a replaced hard drive when I sent in a laptop for repair. I’ve also lost some photos backed up on an iPod. You can bet I wasn’t going to have a repeat of that. I even back up my external hard drives now (they contain some older files not on my main ones)…the key being to have nothing in only one place – and if something must be, to prefer the cloud over a physical location.

The Future – Phase 6 – Hybrid setup? Synchronization? Puppet? Chef? Database replication?

This phase has a lot of questions, but the goal is to be able to switch between online (requires Internet or local network connection) and offline (virtual machine, no Internet required). Additionally, the offline setup needs to be configurable on a laptop, since I’d be using it primarily when traveling on modes of transport with slow or non-existent Internet. My preferred work environment would still be the one requiring Internet, since that would make it accessible from anywhere I could use SSH, provide a separation of concerns (not having one machine do everything), etc. While offline, I would accept not having repository access – that’s all right, since with Git and Bazaar I can commit offline and push my changes later. This works even with bound branches in Bazaar, with the caveat that I have to unbind them first and rebind them later, potentially encountering conflicts. I could live with that (or simply have mirror branches that remain bound and working branches where I actually make the commits, then push the commits to the mirror branches when online; I’ve used this setup before with great success). The main offline requirement would be the ability to develop and test sites on the LAMP stack. Access to manuals and documentation while offline would be a bonus but isn’t required.
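In Git terms, the commit-offline-push-later workflow I have in mind looks roughly like this (the scratch directory, identity, and remote name are only for illustration):

```shell
# Set up a throwaway repository to stand in for a real project:
mkdir -p /tmp/offline-workflow-demo
cd /tmp/offline-workflow-demo
git init -q
git config user.email 'dev@example.com'
git config user.name 'Dev'

# Offline (on the bus, on the ferry...): commits are recorded purely
# locally, so no network connection is needed:
echo 'fix the thing' > notes.txt
git add notes.txt
git commit -qm 'Work done on the train'

# Later, once online again, publish the accumulated commits
# (assumes a remote named origin has been configured):
# git push origin master
```

The whole history lives in the local repository until the push, which is exactly what makes this setup workable on transport with no Internet.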

Synopsis of approaches I’ve tried

Phase 1 – Local, Windows:
Supports offline work: Yes
Acceptable performance: So-so
Sustainable: No
Similar to deployment environment: Probably not
Data safety: Same as rest of system

Phase 2 – Local, Linux on VirtualBox on Windows:
Supports offline work: Yes; host networking required for network communication; can be set up while offline
Acceptable performance: Yes, but dependent on host system
Sustainable: Yes, but VM data should be backed up
Similar to deployment environment: Yes
Data safety: If the VM hard drive file is lost, it’s all gone unless backed up externally. Only easily accessible while the VM is running, although I have read about ways to mount the drive while it’s off. Also, the VM data file can take a couple of days to back up online, depending on the residential Internet connection’s upload speed

Phase 3 – Pseudo-remote (physical machine in same location), Linux via NX Client for Windows
Supports offline work: No
Acceptable performance: Yes
Sustainable: Yes
Similar to deployment environment: Yes
Data safety: Essentially yes; only normal concerns taken for any system apply

Phase 4 – Fully remote (VPS server), Linux via PuTTY
Supports offline work: No
Acceptable performance: Yes
Sustainable: Yes
Similar to deployment environment: Yes
Data safety: Yes, but less so than a physical system due to lack of access to the hardware (e.g., the host may not be willing to recover the data from a failed node drive, etc.) – so remote backup is critically important, although if it’s primarily a dev machine, code repositories may mitigate this risk if they are suitably backed up themselves

Phase 5 – Phase 3 + Phase 4 + repository server
Supports offline work: No
Acceptable performance: Yes
Sustainable: Yes
Similar to deployment environment: Yes
Data safety: Depends on which environment is active. See Phase 3/Phase 4
Another point of consideration is that if either machine crashes and becomes inaccessible prior to migrating the work environment for active projects, only code pushed to the code repository can be retrieved immediately.

My wish list for Phase 6
Supports offline work: Yes
Acceptable performance: Yes
Sustainable: Yes
Similar to deployment environment: Yes
Data safety: Same as Phase 3, at least
Time to switch between online/offline: <= 1 hr
Bonus features: Automated synchronization, automated switching, access to manuals/documentation for PHP/Apache/Drupal

With your help, I hope to get to phase 6. I’ll try to keep this post updated as my strategy develops. Thank you in advance!

P.S. This post is kind of hard to grok. Please leave suggestions for how I can make it easier to read. I want people to actually read it and comment. If they can’t, they won’t bother. Thanks!

Quick Drush Tip – Import database SQL with drush sql-cli (sqlc)

I discovered something awesome today just on a hunch and wanted to share. I’m not the first one to blog about this, but it isn’t widely mentioned on the ‘net, at least not as far as I can see.

Basically, it’s the drush sql-cli command, or drush sqlc for short. If you type just that, you will be logged into a database shell. Did you know, however, that you can actually import a database this way, much as you would with mysql database_name < database_file.sql? Swap out the mysql part and drop in drush sqlc < database_file.sql – and it works! It grabs the DB credentials from your settings.php file, and magic happens! This was an awesome find for me, and it’s going to save me a decent amount of time in the future.
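Side by side, the two import styles look like this (the database, user, and file names are placeholders; the drush version needs to be run from within a Drupal site directory, since that is where it finds settings.php):

```
# The classic way – you supply the credentials and database yourself:
mysql -u someuser -p drupal_db < database_file.sql

# The Drush way – credentials and database come from settings.php:
drush sqlc < database_file.sql
```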

NOTE: This might be MySQL-only. If a PostgreSQL user could chime in and let me know if it works for that or any other DBMSes, I’d appreciate it!

Drupal in 2020

This post is the expansion of a tongue-in-cheek IRC conversation I had the other day. I thought it’d be fun to blog about it because, while it’s mostly just humorous, it actually does cover some of the issues Web Developers encounter while developing in Drupal (or in many other platforms). So, walk with me if you will, into the year 2020 and how your Drupal project might unfold then…

*space-agey whooshing sound*

Vim Tips – Increase/decrease number under cursor

Dear coders,

You know how often I write new blog entries…like once every 3 nevers (and when I’m plugging the release of something I’ve developed, which happens about once every 3 nevers).

And yet, something I found today inspired me to tell the world about it. It may be something you have wanted to do on occasion. Yes, it is, as the title implies, the mere act of increasing or decreasing the number under the cursor in Vim.

How can this monumental task be accomplished? Why, it’s quite easy: <C-a> (increase) and <C-x> (decrease). The C means you hold down ctrl while you type the letter.

Perhaps a use case like mine will help you comprehend the sheer awesomeness of this in things like macros.

Basically, I started with a bunch of SQL ALTER statements. But for demonstration purposes, let’s…I don’t know, add increasing numbers to an array. Fifty times. The language here is PHP, by the way, but it doesn’t matter.

The old way:
// Add numbers to an array
$i = array();
$i[] = 1;
$i[] = 2;
$i[] = 3;

OK, now suspending your disbelief for a second and ignoring all the thoughts of, “Why doesn’t he just use a for loop?” think about how you would normally tackle something like this. Would it be…
yy – copy the line
p – paste it below
/$i – find the next occurrence of $i
(once found) l – move right until you reach the number
cw – change word
(type the new number)
<C-[> or Esc – leave insert mode

Whoa! Imagine doing that 50 times.

However, now we know about <C-a> and <C-x>. So let’s make a macro! In this example, to save space, I’m going to separate the commands with “–” – so don’t type that.

Go to the first line of the whole thing, $i[] = 1. We know that we need to do this 49 more times. So we type:

qa — yy — p — f; — h — <C-a> — q

You can look up macros and these commands on your own, but basically this starts a new macro in register a, where you record the keystrokes to copy the line, paste it onto a new one, jump the cursor to the semicolon, move it left by one, and then increase the number. Once you’ve recorded this macro, if you start on the $i[] = 1 line, you need only type 49@a (replace a with the register you used) and voilà – you’ll have the rest of your code written for you!

Now go and increase some numbers today! Preferably on better code than in the example here.

I’m an official Drupal contributor now – how did you become one?

Today I got my CVS account for Drupal.org. It’s really funny how it happened. I was planning to apply for one anyway in the near future because I’m working on releasing a complementary Drupal module for the Meetup API I developed. However, I actually just wound up patching the Fill PDF module with some extra functionality (as part of client work) and was asked if I could just commit the change myself. I couldn’t refuse something as cool as that, so I applied for a CVS account, and it was approved.

I’ll be committing that change soon, so if you are looking to fill in PDF forms with data from nodes and such, check out Fill PDF.

Now accidentally co-maintained by WizOne Solutions!

Furthermore, I intend to commit it with Git.

Do you have any fun stories of how you got your CVS account? Do share.

The new WizOneSolutions.com has launched!

I’m happy to announce the launch of the new design for WizOneSolutions.com.

After many months of anticipation, it’s here! There’s quite an interesting story behind this new design, too.

Around the end of summer last year, I asked my good friend Andrew Sepic over at Think Up! Design if he could help me create a better design for WizOneSolutions.com than the one you’ve seen for the last forever. He created what you see now. As time went by, in my spare time I gradually put it together, implementing it as a Thematic child theme (thanks ThemeShaper), coding the CSS, and ensuring it worked well in all modern browsers (sorry, IE 6). It’s such a relief to finally get this out of the way, and it’s a big step in terms of giving a real picture of what I can do. The old theme had the unfortunate effect of casting my programming skills in a mediocre light. I think this one does them some more justice 🙂

I’m not done yet, of course. I’ll keep improving the site, particularly the Portfolio area, so keep an eye out for more to come over the next few months.

Sincerely,
Kevin