Training, part 4: Live code demos

February 26th, 2011

You may think of live coding as a necessary evil: the thing you have to resort to when a connection fails and you can’t get at a running example, or when someone asks a question you’re not sure of the answer to and you have to try it out in real time. Admittedly, memories of coding in public (especially the connection-failure ones where you end up writing an entire Rails application on the spot…) are not always pleasant.

But I’d encourage you, as a trainer, to embrace live coding. Don’t wait for things to go wrong before you do it; weave it into your presentations and examples by design. You’ll find that the rewards are well worth the risks, and that the risks can be successfully managed without too much effort.

Live coding in presentations and Q&A

Live coding usually doesn’t happen in silence (except when something’s gone wrong). I use it in tandem with answering questions and presenting topics. I think with my fingers; I’m more comfortable and confident saying things about code while I’m writing code. Whether you’re that way in general or not, live coding can keep things lively in your classroom.

Sometimes the answer to a question doesn’t come to me unless I can do some typing; sometimes the answer comes to me but wouldn’t mean much or be clear enough without live code to illustrate it. I’ve never had the impression that anyone thinks less of me as a programmer or as a teacher because I’ve resorted to trying something out before answering a question. Furthermore, the technique of trying things out is, itself, an important one, and a good lesson for your students to learn. “Is Hash#update a destructive operation?” someone asks. “Let’s ask Ruby,” I reply, and then try it out on the screen in irb. (By the way, update is destructive. So is merge!. But merge isn’t, and there’s no update!. No wonder I have to keep checking.)
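
Here’s roughly how that exchange plays out in irb (a quick sketch; the exact hash display varies a bit by Ruby version):

  >> h = {1 => 2}
  => {1=>2}
  >> h.update(3 => 4)
  => {1=>2, 3=>4}
  >> h                    # the receiver has been modified
  => {1=>2, 3=>4}
  >> h.merge(5 => 6)
  => {1=>2, 3=>4, 5=>6}
  >> h                    # merge left the receiver alone
  => {1=>2, 3=>4}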

“Let’s ask Ruby” is a good slogan to keep in mind, and a good one to impart to your students. I’m always surprised when someone sends a question to a Ruby mailing list that could easily be answered by typing a few characters of code into irb. But it does happen; apparently the trying-it-out habit isn’t ingrained in everyone, and the classroom situation is a good place to try to instill it.

As for live demos during presentations: I show code snippets and examples on slides, but I often end up tabbing away from the slides into a console session and showing live examples of the language features and idioms that the slides describe. In this context I see live coding as a kind of bridge between static presentation and having the students themselves do some coding. Obviously having me code isn’t the same as having them code; but I believe that seeing me enter code on the screen encourages them to do the same, to an extent that seeing a static example doesn’t.

Managing risk with semi-live coding

There are certain things I don’t like doing from scratch live. Deployment with Capistrano is one of them. It never seems to go right all the way along. I’ll take the fall for this—I have no reason to assign blame to Capistrano—but I have to live with the fact that live Capistrano installation and setup demos are, for me, risky.

So I do my live demo of Capistrano, but I do it with the help of a list of steps, written on a piece of paper, and I prepare a certain amount of it in advance. (It’s the permission settings and passwords that always get me.) I don’t hide the list of steps from the class; there’s no detrimental effect to their seeing it, and if they think my use of a script like this means that I’m inadequately skilled, then at least now they know!

Having the demo scripted reduces the risk, if not to zero then at least pretty close. Deciding when you need this kind of aid is up to you. For me it’s Capistrano; for you it might be something else. The main thing to remember is that the usefulness of a live demo for the students is that it’s dynamic and engaging. It doesn’t matter if you use a “cheat sheet”, any more than it matters whether a pianist plays by heart or from the music, as long as the performance is good.

The “cheat sheet” technique applies mainly to fairly complex code demos that you can predict and plan in advance (as opposed to those that you do spontaneously in response to a question, or as a sample illustration of a language feature). Losing your way in the demo isn’t the only risk of a lengthy demo, though; there’s also the lengthiness itself. You don’t want to get embroiled in irrelevant details, nor to start boring the class.

Managing lengthy demos with the cooking-show approach

I draw inspiration from television cooking shows, in the matter of dealing with code demos that might otherwise be too long or detailed. These shows often use a kind of time-lapse technique. The chef mixes some ingredients and gets everything to the point of putting it in the oven. Then, instead of waiting in real time for the cake to bake, the chef produces a finished cake that’s been baked beforehand. The lesson can then resume at the point of frosting the cake.

The cooking show approach can be very handy for in-class demos. It’s particularly suited to cases where you want to demonstrate how to do something relatively small, and then want to show how it fits into a finished application. Rather than have the students sit through the writing of the whole application, you can write the bit you’re talking about—a data validation directive in a Rails model file, perhaps—and then run the whole application and show that it fails when the data is invalid. Or you might walk through the creation of one class or module, but then have the others already prepared so that you can fast-forward.
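
The live part can be quite small. For instance, you might add just the validation line to an otherwise pre-written model (a hypothetical Item model, purely for illustration):

  class Item < ActiveRecord::Base
    # The one line written live in class: reject records with no name.
    validates_presence_of :name
  end

Then you run the already-finished application around it and let the class watch the invalid record get rejected.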

On cooking shows, the fast-forward technique might apply to the overall process (baking a cake, say) or to a subprocess that doesn’t lend itself to real-time presentation, like waiting for dough to rise or chopping large quantities of onions. The TV chef might illustrate the chopping technique, but then pull a bowl of pre-chopped onions out of the refrigerator and thus condense the time-frame.

The same thing applies in training. The time lapse you incorporate may involve a whole application, or it may involve a repeated task at one level of abstraction, analogous to chopping onions. By all means do a live demo of adding a “has_many” association to one or two of your models. But if you’ve got a lot of such cases in your sample application, consider doing most of them in advance and leaving just those one or two to do as a live demo.
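
Here, too, the live portion can be tiny (hypothetical model names):

  class Author < ActiveRecord::Base
    # Added live in front of the class; Book and its migration are prepared in advance.
    has_many :books
  end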

So keep your irb session open, even during your slide presentations, and think creatively about semi-live and cooking show-style demos. Live coding keeps things in the moment, and adds motion and interest to the features you’re trying to put across.

There’s no such thing as pure lecture in my technical classroom. From the very beginning I encourage students to open up program files and interactive interpreter sessions and play with them while I’m talking. Meanwhile I’m often bouncing from a bullet list on a slide, to a console where I demonstrate a coding technique in real time. And much of the time I’m not even presenting; I’m walking around the room helping people with their hands-on exercises.

In writing about modes of instruction—lecture, demo, hands-on or “lab” mode—I am drawing artificial boundaries. In the classroom it’s actually a blend. But bear with me; the artificial boundaries aren’t entirely artificial, and they allow for some salient and helpful points to be made.

In today’s post I’ll be talking mainly about lecture mode: the part where you’ve got the class’s attention and you’re sitting there saying stuff and showing slides and writing code in a file or console. In later posts we’ll look more closely at some specific points about code demos, as well as some ideas for making the most of hands-on exercise time. This time around we’ll focus on the verbal and its cognitive underpinnings.

The teacher’s “advantage”: the z-axis

The biggest problem people face when speaking to a class about a topic is trying to say too much. In fact, it’s largely for the purpose of isolating and tackling this problem that I’ve separated lecturing out as its own topic.

Consider a lecture snippet that consists of the following two points:

To create a Ruby hash, put the keys and values in curly braces.
You separate keys from values using the so-called “hashrocket”, '=>'.

Never mind for the moment what’s being projected on the screen (maybe bullet points, maybe a live code demo). The teacher wants to make those two points verbally about hashes.

Now here’s what happens: a feeling of guilt kicks in, a kind of full-disclosure compulsion. How (one’s teacherly conscience prompts one to ask oneself) can I mention curly braces as literal hash constructors, and not at least mention that they’re used for other things?

So now we’ve got two main points and an aside, with the main points flowing into each other and the aside understood to be in parentheses, so to speak:

To create a Ruby hash, put the keys and values in curly braces.
    (Curly braces are also used for code blocks, but that's a
    different usage.)
You separate keys from values using the so-called “hashrocket”, '=>'.

But you still feel like you haven’t done your pedagogical duty; or maybe that voice in your head is telling you that if you don’t say at least a little bit about operator overloading in general, as a kind of placeholder, your students will later come to feel that you omitted an important topic reference. So, for either nurturing or self-protective reasons, we get an aside inside the aside:

To create a Ruby hash, put the keys and values in curly braces.
    (Curly braces are also used for code blocks, but that's a
    different usage.
        (Lots of operators are overloaded in Ruby -- square
        brackets, for example.))
You separate keys from values using the so-called “hashrocket”, '=>'.

You know you’re digressing but you’re confident that the main topic is moving forward effectively and that the digressions will serve as useful placeholders for later discussion. You have no doubt that what you’ve done amounts to the utterance of two main points, with a bit of embellishment but clear and communicative nonetheless.

In other words, you perceive the asides as occupying a space somehow different from the space of the main points; you perceive them along a kind of z-axis, oblique to the main axis of exposition—something like the axis of depth shown here:

[Figure: digressions stand apart from main points in the teacher’s mind]

The only problem is that what your students are actually hearing—what they, rather than you, perceive—is more like this:

[Figure: students hear topics and subtopics in a flattened way]

The lesson? Have mercy on your students. They’re probably smart, but they don’t have the necessary experience in the topic to evaluate, as your verbal presentation unrolls before them, what’s a main point and what’s a digression. They don’t have the background, so to speak, to pick out the points in the foreground.

Sure, if you make an aside about Led Zeppelin or how to bake bagels, they’ll get that it’s an aside. But if you chase down every little technical opening or clue in your own speech, if you pepper your exposition of a subtopic with points from elsewhere in the general topic, it will only serve to confuse your listeners and add to their anxiety about mastering the subject matter.

When in doubt, lie

In the preface to The TeXbook, Donald Knuth has this to say about topic flow:

Another noteworthy characteristic of this manual is that it doesn’t always tell the truth. When certain concepts of TeX are introduced informally, general rules will be stated; afterwards you will find that the rules aren’t strictly true…. The author feels that this technique of deliberate lying will actually make it easier for you to learn the ideas. Once you understand a simple but false rule, it will not be hard to supplement that rule with its exceptions.

Lecturing without filling in all the details that you know are lurking in the topic feels like lying. It’s OK, though. In fact it’s your responsibility. You’re not really going to say everything, even if you try to cram some extra points in. All these digressions are no more than token efforts, when measured on the scale of the full complexity of your topic. So treat them very skeptically when they present themselves to your brain for delivery to the class.

I don’t mean that you have to become robotic or cleanse your speech of every molecule of outside reference. (Your students won’t let you do that anyway, once they start asking questions.) But try to shake the feeling that you have to cover the entire canvas with your brush on the first pass through. The learning canvas is random access; you can come back to things instantaneously at a later point when they fit in.

And remember that you are not being tested. A lecture is not an oral examination. Even in an oral exam you’d probably want to do more than just a brain dump; all the more should you pick and choose carefully what you say in lecture mode. No one is keeping score. They’re just trying to connect the dots and learn from you.

It’s tricky, of course, because subtopics do have circular dependencies and there are a lot of enticing sub-subtopics on almost any path through a topic. The art of lecturing on technical material (and lots of non-technical material, for that matter) is the art of presenting a non-linear topic in a linear way. Correspondingly, the discipline of lecturing is the discipline of not trying to say everything in the course of talking about any one thing.

Listening to yourself

When you’re talking to a class, you’re performing. I don’t mean you’re being a phony or putting on an act. I mean “performing” in a more technical sense. I’m a musician, so I understand this best in connection with music.

The most difficult and in some ways the most mysterious thing about musical performance is that when you’re performing, you’re also listening. It’s an instantaneous process: the listening part of you tells the playing part of you how things are going and what adjustments have to be made, yet somehow the adjustments aren’t exactly adjustments to anything because they precede the actual production. You can’t really listen to something you haven’t played yet, but that’s what it feels like.

Like I said, a mystery. Let’s leave it at this: performing means letting go but it also means carefully monitoring what you’re doing.

With experience, you learn to listen to yourself as you lecture. When I’m explaining something, part of my brain is creating and delivering the explanation. Another part is consuming it: I’m listening to myself and instinctively spotting the gaps, fuzzy spots, and glitches, hopefully before they happen. I’m also making quick, executive, performative decisions, perhaps literally as I’m drawing breath, as to what’s really relevant and what isn’t.

My impulses, for what it’s worth, tend fairly strongly toward the fill-in-every-detail direction; I need my inner musician (or my inner editor with a red pencil, if you like) guiding me and directing topic-traffic so that I can keep things moving forward. It works, too; but it’s something I had to become conscious of to master.

Presenting a topic with the right balance, the right arc, and the right (probably small) coefficient of digressions, means you’re truly in the teaching zone.

Next up: advice on code demos

For me, the most challenging thing about training is accommodating people who come to a class from different backgrounds and with different levels of experience and knowledge. So let’s dive right into that.

How different are different levels? At times, very.

I remember a Rails class whose members included a participant in that year’s “Rails Day” contest (i.e., a very experienced Rails developer), side-by-side with someone who had literally never written a computer program and never seen a command line. The latter person was a front-end designer, and I have every reason to think she was skilled and successful. But she did not belong in that class. (To be fair, the Rails Day guy probably didn’t either—except that he was actually there in part to help some of the other people in the class, who were his co-workers.)

Given a room with that diverse a population, what can you say? I mean that quite literally: what can you say? What sentence can come out of your mouth that’s going to make sense to everyone in the room and hold everyone’s attention? You don’t want to aim too high and confuse the less advanced people. But you don’t want to bore the more advanced students by aiming too low.

Managers sometimes err (quite understandably) on the side of sending more rather than fewer people to fixed-cost, on-site training courses; and even with public courses, self-selection based on advertised course requirements and content doesn’t necessarily serve to keep the group at one level. If you train, you’ll face this issue early and often.

Still, there’s a lot you can do to make the experience rewarding for everybody even in a very mixed-level group. Here are some suggestions—in no particular order except that the first one is the foundation on which all the rest, and indeed all of your training activities, must be built.

A. Deliver what you promised to deliver

This comes absolutely first and foremost. If the course was sold as a beginner’s course, then you have to provide a worthwhile course for beginners. If it’s an advanced course, you have to give the advanced students the best experience you can. Beyond this you can tweak and adjust; but delivering what you promised has to be the starting point.

Occasionally you’ll get a class where everyone is at the same level and it’s not the level you advertised. That doesn’t happen often, but if it does, you can go ahead and recalibrate the whole curriculum. I did that once with a programming team who had much more experience than I’d been led to expect. They were too polite to come out and say that they were too advanced for the curriculum, but it was pretty obvious. So we changed gears on day two and spent the remaining days on more advanced topics and code critique.

That’s a rarity, though. You’ll find the truly mixed-level class to be much more common, and whatever you do to optimize the experience for everyone, you must stay rooted in the curriculum you’ve promised to cover.

B. Make the prerequisites clear in advance

When you’re discussing the course skill-level with the client or prospective participants, show some tough love: be clear and firm about the prerequisites. It’s in everyone’s interest. Don’t try to be all things to all people. Use clear terms like “not suitable” or “too advanced” or “only right for beginners” if those terms apply.

Course prerequisites don’t have to be fine-grained, but they do have to be clear. “Everyone in the course should already be experienced with at least one object-oriented programming language” is an example of a reasonable prerequisite. So is “The participants need to be at least somewhat comfortable in a Unix-like environment.” You don’t need to know whether they know all the command-line options for grep. You just need to establish the basic requirements.

If you’re teaching a dependent technology—meaning, the way learning Rails depends on learning Ruby—address the role of the parent technology. You can teach Rails to people who know Ruby, or to people who don’t; but mixing Rubyists with non-Rubyists in a Rails class is problematic. Make sure you’re in accord with your clients about any assumed parent-technology baselines.

C. Try to control the teacher/student ratio

To the greatest extent that you can, keep the teacher/student ratio at 1/15 or better. Beyond about fifteen, in my experience, you hit critical mass: the amount of help needed by the students expands beyond the confines of the available time for one instructor, so you fall further and further behind.

This is ancient wisdom, of course. Universities like to brag about their tiny class sizes and all of that. And with training classes (as with universities, incidentally), you can’t always control it. But if you’re presented with, say, the prospect of providing hands-on, closely-coached technical training for twenty-five or thirty people, see if you can find another instructor—or at least a lab assistant who can troubleshoot software installation and other matters so that you’re not doing everything.

D. Have extra material in reserve

Try to have some ideas for extra projects up your sleeve, above and beyond any in your stated curriculum, particularly for students who are too advanced for the course. Bring a few books with you, and if someone isn’t feeling challenged enough, point them to a chapter that you think they can learn from. They won’t feel shunned; they’ll feel relieved. The whole problem is that the too-advanced person shouldn’t be there in the first place, so anything you do to help them not be bored is in their interest.

E. Enlist the help of the more advanced students

I’m thinking of situations where, say, one student is clearly over-qualified for the exercises in the workbook, and another student is struggling. It’s possible that the best course of action is for the over-qualified student to pair up with the struggling one and help them out.

This is a tricky strategy, though. It’s your job, not theirs, to do the teaching. The situation has to be really right before you ask students to teach each other.

But it often is really right. For one thing, teaching something helps you consolidate your own knowledge—so everybody wins, including the student-teacher. And some people would rather spend the time engaged in an activity with someone else than go off and work on an application or read a book.

As the teacher, though, you have to make sure that the result is at least as great as the sum of its parts: all parties involved have to understand and accept the plan. You don’t want it to backfire and have either student (or both) think you’re trying to brush them aside. Be circumspect about this option, but keep it in mind for the right situation.

F. Master the art of not answering questions

Don’t get me wrong: in general it’s good to answer questions. But when questions come from students who probably shouldn’t have taken the course in the first place, and are shifting the focus onto material that’s either too simple or too advanced, it’s your job to protect the class.

When I get a too-advanced question, I usually answer it quickly and, if need be, incompletely. I don’t want to digress too far from the curriculum—and above all I don’t want to make the less advanced students feel anxious because they can’t follow what I’m saying. You can always ask the advanced student to talk to you about their question later; but you only have one shot at making the dynamics of the classroom work for everyone. So answer the question quickly; offer to go into it privately later; don’t get into a lot of examples and demos based on the question (that can really make the other students feel abandoned); and move on.

Too-basic questions can be harder to deal with than too-advanced questions. In fact, it may be through such questions that you first discover that some of your students are under-prepared. This may be a good time to consider a temporary ad hoc student-teacher system (see E., above) where someone else in the class helps the person, assuming it’s something that can be communicated relatively quickly (like how to start Ruby from the command line, how to create a MySQL database from scratch, etc.).

G. Make it easy to move around the materials (e.g., staged code snapshots)

For the main do-as-you-go application in my Rails workbook, I’ve got twelve code snapshots. Each one represents the state of the application at a particular point in the book. If a student falls a bit behind, or wants to skip a section they already know about, they can move to a later chapter or section and “fast-forward” by swapping the appropriate code snapshot into their working code directory.
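
The mechanics don’t have to be fancy. Something like this little helper does the trick (a hypothetical sketch, assuming a snapshots directory numbered by stage and a working directory called app):

  require 'fileutils'

  # Replace the working copy with the code as of a given snapshot.
  def fast_forward(stage)
    FileUtils.rm_rf("app")
    FileUtils.cp_r("snapshots/stage_#{stage}", "app")
  end

  fast_forward(7)   # jump to the state of the application at snapshot 7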

Furthermore, if I need to fast-forward through a topic, I can then pick up from a particular code snapshot and keep going. I might do this with a couple of topics if, say, a client has asked me to deliver a four-day course in three days—or if I’ve just fallen behind a bit and want to get the class back in sync with the courseware. (I don’t make a habit of falling behind but I do try to provide enough material that compressing one or two subtopics won’t be a tragedy.)

Staged code snapshots aren’t the easiest thing in the world to maintain, but they’re a good example of adding an element of independence to the classroom experience for students who want to adjust the pace.

H. Talk to the person in charge

I’m thinking here mainly of private, on-site training engagements (though the principle could be extended to on-going discussions with individual students too). You’ll probably have a discussion at the beginning and/or end of each day with the manager who set up the training. You should definitely bring any issues about class preparedness level to that person’s attention.

When you do, you’ll find that ninety-nine percent of the time the manager will say something like, “Yeah, I was afraid that it would be too easy for Julie” or “Bill said he’d learn Ruby on his own over the weekend but I guess he didn’t.” The manager knows the team. None of what you say is going to be a huge surprise.

Be sympathetic, though. The client didn’t do this to make your life hard. At worst, they just didn’t think it through in terms of preparation and were eager to get the most value out of your skills. Everyone’s acting in good faith.

Sometimes, the manager will take someone out of the training after the first day. I always feel a pang of guilt at this—but I shouldn’t, and you shouldn’t. It’s a correction that will make things easier and more productive for everyone, including the person who shouldn’t have been in the training in the first place. Of course you want to challenge yourself to make the experience accessible to and meaningful for as many people as you can. But don’t be Utopian about it; there really is such a thing as a person for whom a given class at a given time just isn’t right.

Summary

Here’s a summary of the suggestions we’ve just gone over:

  • A. Deliver what you promised to deliver
  • B. Make the prerequisites clear in advance
  • C. Try to control the teacher/student ratio
  • D. Have extra material in reserve
  • E. Enlist the help of the more advanced students
  • F. Master the art of not answering questions
  • G. Make it easy to move around the materials (e.g., staged code snapshots)
  • H. Talk to the person in charge

Handling the mixed-level classroom successfully is not easy. You need to stay alert and to keep applying energy to the situation to make it as good as it can be for everyone, while delivering what you promised to deliver. A mixed-level group requires agility and adaptability, but with structure.

Some of the training companies I subcontract with do Likert scale evaluations (“Strongly agree, Agree, Neutral…”—that kind of question). One of the questions is often about the pace of the class: much too slow, too slow, perfect, etc. In a mixed-level class, I don’t expect everyone to say the pace was perfect. I aim, though, for the mathematically best result possible: I want the curve to max out at “perfect” and fall away (hopefully not too far) to the sides.

Then I know I’ve done my best.

Training: a series introduction

January 24th, 2011

This introduction is the first—number zero, if you like—in a series of articles about technical training, intended to be read by trainers but of interest, I hope, to a variety of teachers, managers, and interested learners from various backgrounds. After this introduction I am planning at least three further articles, addressing such topics as how to handle classes with mixed levels of experience and what’s involved in choosing among different teaching modes (lecture, hands-on, etc.). After that, we’ll see.

Not all teaching is training. But as far as I’m concerned, all training is teaching; and teaching is a fascinating, challenging, absorbing art. I’m not going to philosophize at any length about the terminology. I just want to make it clear that in this series about training, I consider myself to be addressing a branch or style or permutation of teaching, with all that that implies.

I’ve taught a lot and I’ve been at it for a long time. From 1992 to 2005 I was on the faculty of the Department of Communication at Seton Hall University, teaching media history and research to undergraduates. Meanwhile I’d been programming computers as a hobby since 1972 (with some gaps, but pretty steadily since 1990), and I’d become a Ruby and Ruby on Rails expert.

By mid-2005, my academic career and my supposed hobby were on a collision course. I had a year-long sabbatical coming up, with the expectation that I would write an academic book; but that summer I signed a (non-academic) book contract with Manning for Ruby for Rails.

The timing was favorable for a change. Ruby beckoned; and with a sabbatical scheduled I wasn’t expected to be in the classroom anyway. So I changed careers: I resigned from Seton Hall, instead of taking the sabbatical, and started to earn my living as a Ruby consultant, author, and trainer.

I figured I’d finish Ruby for Rails and then get a programming job. I did finish the book, but instead of getting a job I set up a one-man consultancy, Ruby Power and Light, and started taking on short-term contracts—and a lot of training jobs. I trained and trained. In 2006, I traveled to something like twenty-three cities, from California to Sweden, training people in Ruby and Rails.

I continued to make my living mainly from Ruby and Rails training through most of 2009, at which point I started working full-time as a developer at Cyrus Innovation. I’m still involved in training projects, though, especially (though not exclusively) a recurring training event called The Compleat Rubyist. Teaching isn’t my main bread-and-butter at the moment, but it is a part of me and always will be.

I hope you enjoy the series.

Next up: Handling mixed levels of experience in a training class

AccrediDation

March 23rd, 2010

I’ve noticed that people routinely pronounce “accreditation” as if the first ‘t’ were a ‘d’: accredidation. I’ve been wondering why, and I have a theory.

First, consider that ‘t’ often becomes a ‘d’ sound before ‘ed’. “faded” and “fated” can sound very similar. “imitated” is pronounced more like “imitaded”.

Now, it’s also true that while “imitated” sounds like “imitaded”, no one says “imiDation” instead of “imitation”. Nor does anyone say “visiDation”, “mediDation”, or “cogiDation”. So why “accrediDation”?

The reason, I believe, lies in the past tense form of the word: accredited. In that word, the ‘t’ after the ‘d’ sounds like a ‘d’ (like “imitaDed” and so on). I surmise, therefore, that something along the following lines happens when people pronounce “accreditation”: The brain gets to that first ‘t’ after the ‘d’ and, out of habit born of d-ifying the ‘t’ in “accredited”, pronounces it as a ‘d’.

What the brain doesn’t quite take into account is that there’s a syllable “missing”. If the past tense were “accreditated” instead of “accredited”, then no one’s brain would ever have thought that there were two ‘d’ sounds in a row, and no one would say “accredidation”.

It’s the only theory I can think of that explains why this word alone, among all the -itation words in the language, gets pronounced this way.

Starting a new job in December

November 13th, 2009

I am very pleased and excited to announce that I have accepted a Senior Developer position with Cyrus Innovation, Inc. Cyrus is based in New York City. I will be, at least for the foreseeable future, assigned to a team working on-site at a New Jersey client. It’s a work-place I’ve been in before (I’ve done training for them), and I know some of the other members of the Cyrus team who work there. So while it’s definitely a big change for me and a new adventure, it’s also a familiar and collegial environment that I already know I like working in.

And it really is a big change! The last full-time job I had was my professorship at Seton Hall University (1992-2005).

The question is (drumroll…) why now?

Throughout the years that I’ve been doing freelance and independent consulting and training, I’ve regarded the prospect of a fulltime job with ambivalence. On the one hand, it’s less independent. And much of the brainstorming I’ve done this year about whether or not to seek a fulltime job has been kind of depressing, because it’s been motivated largely by the fact that my freelance business has dropped off a great deal (and I have no marketing skills, which means that when the market gets tight, I tend not to remain competitive). I’ve also been conflicted about fulltime jobs because I am very settled where I live and do not want to move.

On the other hand, I’ve always understood that a fulltime job would provide a measure of continuity and security that I’m increasingly feeling the lack of in my independent work. And, even more importantly, there’s the sense of belonging to a team of colleagues. I’ve always looked with a pang of envy at friends who are part of a development team, and whenever I’ve spent even a couple of weeks on a team helping out, it’s been incredibly stimulating. I always go through a big learning spurt when I work directly with other developers, and I don’t do nearly enough of it.

So I’d reached the point where I was interested in a full-time job but, fussy customer that I am, it had to be one that didn’t require me to sell my house and move, and that I had very, very good reason to believe would provide me with the kind of collegial environment that had been, for four years, the thing I had pined for the most as an independent. (I also didn’t want to telecommute, because sitting alone in my house literally all the time is not the right formula for me.)

Well, fast forward a bit and here I am, having found what I was hoping to find! That’s the story. I start December 7. (And all the “date that will live in infamy” jokes have already been made :-) Wish me luck!

First things first:

In case you haven’t heard about it, I’m very excited to report that I am teaming up with two other Ruby programmer-authors, namely:

  • Gregory Brown (author of Ruby Best Practices)
  • Jeremy McAnally (author of Ruby in Practice)

to present The Compleat Rubyist, a two-day Ruby training event in Tampa, Florida, January 22-23 2010.

The idea behind the event

It all started with the books.

We got the idea of doing some kind of joint project because our books (including the two above, plus my book, The Well-Grounded Rubyist) complement each other really nicely. My book is a language tutorial. Jeremy’s book (to which both Greg and I contributed) contains advice about using Ruby in a series of application contexts. Greg’s book makes a different kind of pass through the language, with an eye to idiomatic, productive techniques.

A training event seemed like the perfect collaborative effort. We’ve designed an unusual format, optimized for in-depth learning and for a workshop/hands-on style.

Who’s it for?

I’ve been training Ruby programmers for years, and I can tell you that it’s very common to become quite good at Ruby but still have room for getting deeper into how things work, what the best practices are, and other areas.

I’d say that’s the “sweet spot” for our attendees: people who have been using Ruby, and want to go further in their understanding and skills.

Does that mean intermediate? advanced? talented beginner?

Hard to say. I’d like to think that almost any Ruby programmer can get something out of spending two days with us. (And we’re hoping to get a lot out of it too.) We’re not that concerned with pinpointing a level. Have a look at the event description, and decide whether it sounds good for you.

See you there, we hope!

We’re happy to field questions, if you have any. There’s a contact link on the event website, as well as links for registering and for more info about the venue.

I’ve watched no more of the Sotomayor hearings than has happened to be on while I’ve waited for the guy behind the counter to toast my bagel, and things like that. I don’t see much point in watching them, since it’s pretty easy to predict what her critics are going to ask her and say about her, and not terribly interesting to hear her answers.

But I do want to say something about this “wise Latina” thing, if I can do so without boring myself as well as you to death.

With very few exceptions, all Supreme Court justices, ever, have been white men. So have most other judges in the U.S. That means that someone, somewhere along the line, felt that white men make wiser decisions than people who are not white men. Maybe closer to “everyone” than “someone”, in fact.

White male jurists never have to say anything public to the effect that white males are wiser, as jurists, than people who aren’t white males, because it’s been said for them. It’s been said by virtually every President who has made judicial appointments and nominations, every Senator on whom the strangely homogeneous pattern has not weighed heavily, and every citizen who never considered withholding a vote from the perpetrators of this centuries-long exercise in exclusion.

In short, the entire history of the Supreme Court and much of the rest of the judiciary amounts to a sustained assertion that white men make wiser decisions than anyone else.

So along came Sotomayor, and expressed a different opinion. She expressed an opinion that was not the opinion on which the entire history of the Supreme Court has been predicated. She espoused the belief that white men do not, in every imaginable case, make wiser judges.

Well!

How dare she?!

Doesn’t she realize that The Universal Opinion on this subject has already been established?

Of course it’s the same old thing. The belief that white males are wiser is so widespread, so ingrained, so taken for granted, that it seems natural. You don’t have to think about it; your thinking has been done for you. And you don’t have to be so gauche as to say that you believe it, because as long as you don’t say anything, it will be assumed that that’s what you think.

All Sotomayor did was to respond. She was responding to history. History was saying—loudly, repeatedly, in chorus echoing down the centuries—that white men make wiser jurists. Sotomayor said: maybe not, under some circumstances.

That’s all.

Think of it this way. Sotomayor walks down the street every day, her whole life, and every couple of blocks, somebody says to her: White male jurists make wiser decisions than anyone else. Senators say it; Supreme Court justices say it. Citizens say it; Presidents say it.

After a lifetime of that, Sotomayor says: well, not necessarily.

And everyone gets mad at her.

The every-couple-of-blocks thing represents about one millionth of one percent of what Sotomayor, and the rest of us, have actually had communicated to us over our lifetimes. So why the hell shouldn’t she respond? And why are people treating her like Oliver Twist asking for more gruel?

My new Ruby book is out!

June 10th, 2009

I’m realizing that the new book isn’t getting enough buzz, so here’s some buzz!

My new book, The Well-Grounded Rubyist, is now out and available from the publisher as well as Amazon and other retailers and stores.

If you’re learning Ruby, or want to learn Ruby, or want to refresh your Ruby knowledge and get more deeply into it…read this book! I talk more about the book in my recent interviews for InfoQ, On Ruby, and RubyLearning.

Some reviews and comments

Here are some review quotations, from various sources:

I think this book is a definite read and should be in every Ruby developer’s library.
...
Excellent. Easy to read, but not dumbed down. I came away with a much deeper understanding of WHY oop is used, and how to use it in ruby.

If you are looking to understand ruby, look no further.
...
David does an excellent job going beyond the language and hitting those concepts in the built-in classes and modules that you need to know and will experience in the real-world.

You can also find complete reviews here and here.

(And don’t get confused if some sites have a different-looking cover. There were two cover designs. The new one is the one you see here.)

Enjoy!

It’s been a busy few days, with the release of not only my Ruby 1.9 Envycasts but also the PDF version of my new book The Well-Grounded Rubyist.

TWGR is an expanded, updated, Ruby-only reworking of my 2006 book “Ruby for Rails”. It targets Ruby 1.9.1, and includes a great deal of new material (enough that it took me almost a year longer than I thought it would to write :-) The book is entirely about the Ruby language, not Rails. Lots of readers of R4R encouraged me to write a “just-Ruby” book, and here it is!

I’m looking forward to the release of the paper version on May 1, too. Not sure yet whether there are Kindle and/or Sony e-reader versions coming, but I’ll keep you posted.

Here’s a passage from The Man in Lower Ten by Mary Roberts Rinehart, published in 1906. I’ve included some context but the main thing I’m interested in is the appearance of the word “cool” in the second paragraph.

“Nonsense,” he said. “Bring yourself. The lady that keeps my boarding-house is calling to me to insist. You remember Dorothy, don’t you, Dorothy Browne? She says unless you have lost your figure you can wear my clothes all right. All you need here is a bathing suit for daytime and a dinner coat for evening.”

“It sounds cool,” I temporized. “If you are sure I won’t put you out—very well, Sam, since you and your wife are good enough. I have a couple of days free. Give my love to Dorothy until I can do it myself.”

I can’t see what “cool” means in the second paragraph, other than “cool” in the slang sense that we use it. My understanding is that “cool” in that sense started, or at least came into common usage, during or after World War II. In any case, 1906 seems insanely early for it.

But what else could it mean in the quotation above? The wardrobe described in the first paragraph doesn’t suggest a particularly cool climate. Is there some other nuance of the word I’m not getting?

I shall leave comments open on this one, at least until the spam gets intolerable.

The answer is…yes! I did mention it. But I’ll mention it again.

Want to learn Ruby, and learn it right?

Come to Atlanta for three days and learn Ruby from:

  • me (author of Ruby for Rails, The Well-Grounded Rubyist, and other stuff; long-time Ruby programmer; one of the most experienced Ruby trainers on the planet)
  • Jeremy McAnally (“mrneighborly”, author of Ruby in Practice, creator of the Ruby Hoedown (annual conference))
  • Rick Olson (“technoweenie”, member of the Rails core team; plugin writer extraordinaire)

You gotta better way to learn Ruby?

I doubt it. Just read that list of instructors again… and you get training materials, a book (“Ruby in Practice”), and lunches.

There’s registration info here, and you can contact me directly with any questions.

Hope to see you there!

P.S. If you’re a Ruby expert but have friends or co-workers or employees who could use an accelerated intro/intermediate course, send them along!

Want to learn Ruby, or improve what you already know? Come to Atlanta!

Ruby Power and Light and ENTP are teaming up to present a three-day Ruby course in Atlanta. You can get more info, and register, here.

Training will be by me and Ruby developer/author Jeremy McAnally (“mrneighborly”). And Rick Olson (“technoweenie”) will be there too, helping with the training and sagely dispensing Ruby wisdom and advice. (Seriously!) It will be at the Georgia Tech Hotel & Conference Center.

Please email me if you have any questions. Otherwise, see you there!

I hate it when athletes thank God when they win. My reasons for hating it have nothing to do with my own atheism. I hate it because it’s narcissistic and because it’s theologically infantile.

If you win a game and then thank God, and do not thank God when you lose, you are going on record as believing that God wanted you to win, and that a victory by your opponent would have represented a thwarting of God’s plan.

But how do you know? Isn’t it possible that losing is what God has planned for you, and that it will do you good? Maybe losing will strengthen your character. Maybe your opponent needs the win (or the prize money) more than you do, and God somehow managed to figure that out in spite of being dazzled by your greatness. Maybe you should be thanking God for protecting you from the sin of pride by not letting you win a spiritually meaningless, entirely earthly contest.

But I’ve never seen an athlete drop to his or her knees and thank God after a loss. Why not? Because the ones who thank God when they win have a dinky, anthropomorphic conception of God. Their God is “the man upstairs,” the Santa Claus figure, the parent who may or may not give them the birthday present they want. And to hell with the other kids. Me, Me, Me.

So what gives? Where does this all come from? Whose big idea was it to thank God only for bringing about what they themselves wanted to happen anyway?

Let’s go back to ancient times. Things were different with respect to thanking gods, because there were lots of gods and the gods took sides in the contest. It made sense for the Greeks to thank Athena for the victory over the Trojans because Athena was, at some Olympian level, duking it out with Ares and Aphrodite. The Greeks’ powerful friends prevailed over the Trojans’ powerful friends. And the Greeks understood that someone had actually made an effort on their behalf, faced uncertainty, and prevailed. So they thanked her.

Dear athlete: Do you think that God faces uncertainty when you play a tennis match?

Do you think that God has to make an effort on your behalf to make sure you win?

Do you think that God’s enemy is rooting for your opponent?

And if you don’t think all that, what exactly are you thanking God for when you win? I mean exactly. Not just vaguely that you’re happy, and happiness feels good, so it must come from God. That’s theological babytalk.

The best thing that can be said about thanking God for an athletic victory and not for a loss is that it’s an ignorant corruption of what was a perfectly reasonable pagan practice. If you’re a monotheist and thank God for a win, you’re making a statement about your own inherent worth, and what you believe is God’s opinion of that worth, in comparison to the inherent worth of your opponent. You’re asserting that your victory is of the Lord to an extent that a victory by your opponent would not have been. And you’re implying unmistakeably that your opponent is in league with God’s enemy.

In other words, thanking God for an athletic victory is stupid, uninformed, thoughtless, self-absorbed, and about as far from anything religious or spiritual as you can get. I understand the whole thing about religion not being the same as rational thought. But this isn’t even the same as religious thought. It’s just vanity.

Registration is now open for RailsConf 2009 (May 4-7). You can get more details, and register, at the RailsConf 2009 website.

RailsConf is taking place in Las Vegas, one of my favorite cities. Yes, I know what a weird and ironic place it is. But for whatever reason, I’ve always found it extremely enjoyable. May is a good time to go—hopefully not too hot to step outside!

There’s a lot going on at RailsConf this year, highlighted by its timing in the wake of the Rails/Merb merger decision. There will be lots of merger news and highlights, along with the usual great lineup of talks and, above all, the chance to meet and get to know other Rails developers as well as Rails core team members, authors, bloggers, and pretty much the whole gang!

A hiatus year for RailsConf Europe

Ruby Central and O’Reilly have decided to take a hiatus from producing RailsConf Europe this year, for the simple reason that it didn’t bring in enough revenue last year to justify doing it again, particularly given the tight economy and the need to err on the side of caution. RailsConf Europe has always been a really great event, and people who go to it really love it, but we need a year of retrenchment while we figure out how to get everyone else to realize how great it is! Plans for 2010 are not certain yet; we’re taking it one year at a time.

Meanwhile, the Ruby and Rails communities continue to produce an astonishing number of high-quality, uniquely branded and flavored events. I’m not even going to try to list them all here. Do a search, though, and you may very well find one near you.

I’m happy to see that my recent 10 things to be aware of in moving to Ruby 1.9 article has proven helpful to lots of people. This article is a follow-up.

The goal of the article was to point out 1.9 features and changes that might cause your existing code not to run correctly, or not to run at all. I went a bit soft, though: two of the original ten (hashes being ordered and the changes in method-argument syntax) weren’t really things that might break your 1.8 code.

So I feel I owe the world two more code-breaking 1.9 features! And they’re here, along with a bonus one.

But first, some links

The denizens of ruby-talk have provided lots of helpful ideas and feedback. James Edward Gray II and others mentioned M17N, a topic on which I defer to the more expert among us, especially James who has written a multi-part M17N guide. He’s going to be expanding it to include 1.9 encoding, so keep an eye on it.

Brian Candler suggested that people might be interested in the presentation by me and Dave Thomas at RubyConf 2008 on Ruby 1.9: What to Expect. We cover some pitfalls but also some new, non-pitfall features you might want to know about.

If you’re interested in Ruby 1.9 generally, you might be interested in my forthcoming book The Well-Grounded Rubyist, which is a fully revised, revamped, “Ruby only” second incarnation of my 2006 book Ruby for Rails.

Apologies to anyone I’ve failed to credit, and thanks to all for the feedback.

And with that, here are the pitfalls! (Speaking of pitfalls, I think I’ve remembered all the <pre> tags this time….)

String indexing behavior has changed

(Thanks to Michael Fellinger and Robert Dober)

In Ruby 1.8, indexing strings with [], as in "string"[3], gives you an ASCII code:

  >> "string"[3]
  => 105

In order to get a one-character-long substring, you have to provide a length:

  >> "string"[3,1]
  => "i" 

In Ruby 1.9, the indexing operation gives you a character.

  >> "string"[3]
  => "i" 

Also, kind of along the same lines, the ?-notation now gives a character rather than a code. In 1.8:

  >> ?a
  => 97

and in 1.9:

  >> ?a
  => "a" 

if-conditions can no longer end with a colon

In 1.8 you can do this:

  if x:
    puts "Yes!" 
  end

In 1.9, you can’t use that colon any more. The same is true of when clauses in case statements. This will not parse in 1.9:

  case x
  when 1: "yes!" 
  end

Bonus thing! No more default to_a

In 1.9 you cannot assume that every object has a to_a method. You’ve probably seen warnings about this in 1.8, and the day of reckoning has now arrived.

  >> "abc".to_a
  NoMethodError: undefined method `to_a' for "abc":String

You can use the Array method to turn anything into an array. If it’s an array already, it returns the object itself (not a copy). If it’s anything else, it tries to run to_ary and to_a on it (in that order), and if those aren’t available, it just wraps it in an array.
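
In irb, that behavior looks something like this (results as I’d expect them in 1.9):

  >> Array([1,2,3])
  => [1, 2, 3]
  >> Array(nil)
  => []
  >> Array({1 => 2})
  => [[1, 2]]
  >> Array("abc")
  => ["abc"]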

Array isn’t new, but we’re likely to be using it a lot more now that there’s no default to_a operation.

Have fun!

Update: There’s a sequel to this post, called Son of 10 things…

I’ve been writing a lot about Ruby 1.9 (my book The Well-Grounded Rubyist is due out in a couple of months), and I thought I’d share my personal list of things you need to be careful of as you go from 1.8 to 1.9. This is not a list of changes; it’s a list of changes that you really need to know about to get your 1.8 code to work in 1.9, things that have a relatively high likelihood of biting you if you don’t know about them.

Strings are no longer enumerable

You can’t do string.each and friends any more. This has an impact, for example, on the Rack interface, where there has in the past been a requirement that the third item in the returned array respond to each.
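
A quick 1.9 sketch of the change (each_char is one of the replacements for character-by-character iteration):

  >> "abc".each {|c| puts c }
  NoMethodError: undefined method `each' for "abc":String
  >> "abc".each_char.to_a
  => ["a", "b", "c"]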

Block argument semantics

This is a big change, and a big topic. The salient point is that when you do this:

  array.each {|x| ... }

the block parameter list is handled like a method parameter list. In 1.8, blocks use assignment semantics, so that a block parameter like |@x| behaves like an assignment to @x. That’s why in 1.8 you can do:

  array.each {|@x| ... }

(assign to an instance variable) or even:

  array.each {|self.attr| ... }

(call the attr= method on self). You can’t do those things in 1.9; the parameters are bound to the arguments using method-argument semantics, not assignment semantics.
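
In 1.9, those block parameter lists are rejected at parse time; you’d expect something along these lines:

  >> [1,2].each {|@x| }
  SyntaxError: (irb):1: formal argument cannot be an instance variable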

Block variables scope

Block parameters are local to the block.

  x = 1
  [2,3].each {|x|  }

In 1.8, x would now be 3 (outside the block). In 1.9 the two x’s are not the same variable, so the original x is still 1.

However, a variable that (a) already exists, and (b) is not a block parameter, is not local to the block.

  x = 1
  [2,3].each {|y| x = y }

x is now 3. If you want or need to shield your existing variables from being used inside the block, declare variables as block-local by putting them after a semicolon in the parameter list:

  x = 1
  [2,3].each {|y;x| x = y }

x is still 1.

Method argument semantics

Method arguments do some new things too. In particular, you can now put required arguments after the optional argument glob parameter:

  def my_meth(a,*b,c)

There aren’t too many situations where you’d want to do this (though there are one or two).
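
For the record, the arguments get distributed like this (a quick sketch):

  def my_meth(a,*b,c)
    [a, b, c]
  end

  my_meth(1, 2, 3, 4)   # => [1, [2, 3], 4]
  my_meth(1, 2)         # => [1, [], 2]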

The * operator has changed semantics

Compare 1.8:
  >> a = [1,2]
  => [1, 2]
  >> *b = a
  => [[1, 2]]
  >> b
  => [[1, 2]]

and 1.9:

  >> a = [1,2]
  => [1, 2]
  >> *b = a
  => [1, 2]
  >> b
  => [1, 2]

I’ve always interpreted the * operator in the following way:

The expression *x represents the contents of the array x, as a
list.

In 1.8, *b = [1,2] means that [1,2] is the contents of the array b, which means that b is [[1,2]]. The 1.9 semantics don’t seem to behave that way. I’m not sure what the new general rule for * is, or whether maybe I was wrong that there was such a rule that governed all cases (though I can’t think of an exception).

Hashes are ordered

This isn’t likely to bite you but it’s something to be aware of, both in your own code and in looking at the code of others. Hashes are ordered by insertion order. Reassigning to a key does not change the insertion placement of that key.
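
A quick illustration of both points (output as I’d expect it in 1.9):

  >> h = {:b => 1, :a => 2}
  => {:b=>1, :a=>2}
  >> h[:c] = 3
  => 3
  >> h[:b] = 100
  => 100
  >> h
  => {:b=>100, :a=>2, :c=>3}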

method and friends return symbols

Expressions like obj.methods and klass.instance_methods return symbols instead of strings in 1.9. That means that you might have to do to_s operations on them, if you need them as strings. However…
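
For example (1.9):

  >> 1.methods.include?("to_s")
  => false
  >> 1.methods.include?(:to_s)
  => true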

Symbols are string-like

... symbols have become very string-like. You can match them against regular expressions, run methods like #upcase and #swapcase on them, and ask them their size (i.e., their size in characters). I’m not sure what the purpose of this is. I’d just as soon have symbols not be any more string-like than they absolutely have to be.
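
A few examples of the string-like behavior (1.9):

  >> :hello.upcase
  => :HELLO
  >> :hello.size
  => 5
  >> :hello =~ /ell/
  => 1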

Gems are automatically in the load path

When you start Ruby (or irb), your load path ($:) will include the necessary directories for all the gems on your system. That means you can just require things, without having to require rubygems first. You can manipulate the load path per gem version with the gem method.

Lots of enumerable methods return enumerators

Called without a block, most enumerable methods now return an enumerator. It’s fairly unusual to use the return value of blockless calls to map, select, and others, but it’s worth knowing that now you cannot assume that, for example, Array#each will always return its receiver.

You can use this feature to chain enumerators, though the circumstances in which chaining enumerators really buys you anything are pretty few. I don’t know of a case where you would do this:

  array.map.other_method { ... }

with the exception of map.with_index. The map call is essentially a pass-through filter here. (This was not true in early versions of 1.9, where you could attach knowledge of a block to a chained enumerator, but that behavior was removed.)

Incidentally, you win the prize (which is endless glory :-) if you can account for the difference between these two snippets:

  >> {1 => 2}.select {|x,y| x }
  => {1=>2}
  >> {1 => 2}.select.select {|x,y| x }
  => [[1, 2]]

It’s all about enumerators….

If you’re careful about these changes, and keep an eye out for others, you should be able to continue to have fun with Ruby in version 1.9 and beyond!

Announcing the opening of WishSight!

WishSight is for managing wishlists and gift-giving. It lets you see who’s given (or promised) what to whom, and it lets gift-givers for particular people communicate with each other, via a comment-board, so that they don’t duplicate gifts.

It’s based on a Christmas-list application I wrote in 2005 that my family and friends have been using every year since then. It’s completely merchant-unaffiliated. You can post links for the gifts you want, and they can be links to any merchant.

WishSight helps you cut down on gift duplication, and increases the chances that people will get things they actually want, without the gift-givers having to do a round-robin of email or phone calls to pin down who’s buying what. And chances are they don’t all know each other anyway—which doesn’t matter on WishSight, because you all communicate by leaving comments directly on your mutual friend’s wishlist.

All you have to do is:

  • sign up
  • list the email addresses of people who you want to be able to see your wishlist
  • get those people to sign up and “whitelist” your email address
  • list your wishes
  • stake “claims” on other people’s wishes

There’s no stealth: the email addresses are only used internally to determine who’s allowed to see whose wishlist. Also, you can list email addresses even if the people haven’t signed up yet. Once they do sign up, they will automatically have permission to see your wishlist and claim your wishes. No two-sided “handshakes” required; you just whitelist people.

Have fun, and let me know if you have any questions or problems!

Don’t get me wrong. I’m not saying that other forms of hate and prejudice are extinct, or even on the wane. But it feels like the stars of anti-Muslim sentiment and homophobia are in the ascendancy.

It’s very much about statements that don’t sound aggressive or hateful, on the surface, but that would never be made if hate didn’t lurk just below. I’m thinking, for example, of a report I heard on the radio of some attack or other, involving “three Muslims of middle-eastern descent.” I might have the phrasing of the “middle-eastern descent” part wrong (though it was that or close to it). In any case, the salient bit, for me, was “three Muslims.”

When was the last time you heard a crime described as having been committed by “three Christians”? How about “A Jew broke into a convenience store…”? So what’s up with “three Muslims”?

What’s up, of course, is hate. I don’t think the radio announcer or the newswriter hates Muslims. But they do operate under a compulsion to mention explicitly that Muslims are Muslims, and ultimately that’s so that the listenership can be put on alert to hate them. Does the phrase “three Muslims” have explanatory power? Did these people do whatever they did because they are Muslims? No. There’s no reason to mention their religion except out of habit of mentioning the fact that Muslims are Muslims.

Back when I was a university professor (1992-2005; in this case somewhere around 2003, I think), the school newspaper had a kind of “person-in-the-street” feature, where they’d ask a few people around campus a question and print selected answers. One week, the question was something about Iraq. One of the people quoted in the feature said something along the lines of, “Bomb them all off the face of the earth.” Or “Blow them all up”—words to that effect.

My response was to call the editor-in-chief of the newspaper into my office and have a little chat with him. I was under no institutional imperative to do so—I was not involved with the paper directly—but it seemed to me that I had an opportunity to teach him perhaps the most important lesson of his college career. “If the question of the week had been about how to improve the cafeteria food,” I asked him, “and someone had said, ‘Line the whole cafeteria staff against the wall and shoot them dead,’ would you have printed it?”

Of course he would not have, and said that he would not have. “The fact that what we would not say about the cafeteria workers, we would say about the entire population of a Muslim country,” I explained, “is the dehumanization process at work.” I do believe he understood and took my point on board.

So we mention that people are Muslims, and we lower the bar when it comes to suggesting (or, if you like, joking about) their violent deaths. And it’s all very dangerous and should be sending up serious alarms.

Labeling the gay as gay is an even more popular pastime. The world has settled for a breathtakingly stunted view of what homosexuality entails, and how it manifests itself. It manifests itself, by the way, as itself, not as an obsession with the song “YMCA” or an expertise in designer footwear. Hey, more power to you if you have that expertise. But the set of all men who do intersects in a minuscule subset with the set of all men whose primary sexual orientation is toward men. Ditto for all the stereotypes.

Of course, the world can’t deal with the idea that homosexuality manifests itself only as itself, because if that’s true, it means you can’t tell who’s gay; and that, like being unable to tell who’s Jewish, is unacceptable. The workaround is to pretend that you can tell who’s gay, resorting to babytalk about your “gaydar” when the stereotypes, as they must, fail you.

And then, following a fairly tight train of thought, there’s hatred of gays.

First of all, let me explain that I include, as hatred, the “love the sinner, hate the sin” horseshit espoused by the Catholic church. It is, to be sure, a kinder, gentler hatred than the burning-at-the-stake kind. The idea is that you’re enlightened enough to acknowledge that some people just are gay. But you also understand that, as gays, they must never indulge in the kinds of sexual activities they feel interested in. So you, as the compassionate believer, offer to contribute to their happiness by giving them support and encouragement as they fight to maintain their chastity.

How noble.

The church, of course, has two thousand years of experience disguising hate as love. But this one is particularly devious and malign. Let’s cut to the chase: the only reason that one adult human being would try to stop another adult human being, on a lifelong basis, from attaining romantic and/or erotic satisfaction is that he or she (human one) hates him or her (human two). No amount of theological stroking can change that. It’s hate.

Not news, of course, that the Pope and friends hate gays. But interesting to see how slimy and prurient they can get, in the process. Anyway, let’s move on.

Actually we can borrow a concept from the church: “invincible ignorance.” When I read the stuff about homosexuality being a choice (note that it’s not that sexual preference is a choice, just homosexuality—which makes it kind of weird to describe it as a choice), my reaction is that if you put twenty articulate, knowledgeable people in a room for twenty years with the person who’s taking the “choice” position, that person would emerge still saying that homosexuality is a choice. There’s no point of entry for explanation, and no point of contact with reality.

It’s pathetic, but I still count it as hate. At least it leads to hate. Or from hate, perhaps. Or maybe these people are actually choosing to be vicious, and could stop themselves if they really wanted to. It’s hard to know. They’re not saying.

With gay marriage on the news radar these days, more and more of this kind of discourse is showing up: the choice thing, but also the “gays recruit people” thing (which is actually backwards; have these people ever watched television commercials?) and, most disturbingly of all, the “gays prey on children” thing. And each of these things embodies two problems: first, that people believe it; and second, that it’s acceptable to say it publicly.

Which hateful statements are acceptable and which aren’t is a kind of lump under the carpet that moves around but never goes away. Unfortunately, the underlying hate never goes away either—and ultimately, no matter which targeted people or groups we’re talking about, it’s the underlying hate that matters. But who gets to say what, and when, and with what consequences (or lack thereof) is, in itself, something that I think it’s worth keeping fairly close tabs on.

I’m encouraged by a couple of recent conversations to go public with this possibly wacky idea. It has to do with code and testing.

I’ll start with the idea, and then say something about why I’m thinking along these lines.

The idea is for a programming system designed in such a way that the code and its tests are physically together, in one file. Furthermore, that file is not executable. You have to run it through a dedicated filter utility to generate the actual code file(s) from it.

So it’s a bit like, and indeed inspired by, Knuth’s Literate Programming, where the code and its documentation are fused together in a single file which contains both but is, itself, neither. You can’t execute that file; you have to generate the real code files from it.

Adapting the master-file idea to testing, as I envision it, would also entail the following constraint: that the system would refuse to generate the code files unless the code involved already had tests, and those tests passed. In other words, the whole system would militate against using untested code in production, by physically obstructing the creation of executable code files for untested stretches of code.
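
Purely as a thought experiment, and with completely made-up syntax, the flavor of master file I'm imagining might look something like this:

  # stack.pp -- hypothetical "probative" master file; not executable as-is

  #code Stack
  class Stack
    def initialize; @items = []; end
    def push(item); @items.push(item); end
    def pop; @items.pop; end
  end
  #end

  #test Stack
  require 'test/unit'
  class StackTest < Test::Unit::TestCase
    def test_push_then_pop
      s = Stack.new
      s.push(1)
      assert_equal(1, s.pop)
    end
  end
  #end

  # A generator would run the #test sections and refuse to write
  # stack.rb unless they pass.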

It seems to me that this would make for a much more sensible and efficient flow of energy than what we’ve got now. What we’ve got now are separate files, and therefore the possibility of running untested code. As long as that possibility exists, people will run untested code. Reordering things so that the creation of the executable code comes after the successful test run would, potentially, realign the energy of the whole process in a very productive way.

As things stand now, the energy is flowing in a wrong and wasteful way. The evidence for this is sociological, at least as much as it is technical. Thorough testing involves keeping the code and the tests in contact with each other through willpower and force, like holding like ends of two magnets together. Therefore, people who test consistently end up with bragging rights, which they often exercise. I hasten to add that I’m not talking about the really accomplished, masterful engineers of the great testing frameworks we’ve got available to us. Those people are above bragging. But there’s a sub-population that isn’t.

I’m really tired of seeing the test police needling people about not having written tests. It’s not that people shouldn’t write tests. Like I said, it’s about the energy flowing the wrong way. The whole culture of test machismo is, start to finish, a waste of energy and, above all, doesn’t work. You can’t get the whole world to write tests by trying to shame people into it, one person at a time. As long as the technical conditions allow for untested code, untested code there will be.

So we’ve got untested code, alongside a culture of testier-than-thou assertiveness. Neither is good.

And then there’s the “programming should be fun” thing. Programming should be fun. Testing should be a big part of programming. Therefore, testing should be fun. However, it’s acquired a sort of “do it because it’s good for you” aura, like using a treadmill or eating your vegetables. Again, this take on testing is wasteful and irrelevant—but it arises directly from the physical possibility of running untested code, and will not go away as long as that possibility exists.

I’ve made some very sketchy, preliminary attempts to see what a Probative Programming file might look like, for a Ruby program. It’s a daunting task, and one I may or may not ever succeed at. But I’m convinced that something along these lines is both possible and desirable.

Finally, if there are existing systems that do what I’m describing, or anything substantially similar to it, I’d be interested in hearing about them.

RESTful Rails for the restless

November 24th, 2008

QuickStarts-R-Us

As one of the most active Rails trainers on the circuit, I come up a lot against the challenge of introducing RESTful Rails to relative newcomers. It’s a challenge because the REST support in Rails is very high-level and, even for the diligent, basically impossible to understand deeply without a knowledge of the subsystems—in particular, the routing system—on which it is built.

I believe it’s possible, nonetheless, to understand up front how the RESTful support in Rails fits into the subsystems that support it; and I believe that it’s beneficial to gain such an understanding. My purpose is thus to provide a “QuickStart” introduction, not to the practice of writing RESTful Rails applications but to the way the REST support in Rails fits into what’s around and beneath it. If you want to do RESTful Rails but either find it too magical or don’t quite understand how it relates to the framework overall (does it add? supersede? enhance?), then this article may be of interest to you.

You may wonder why I’m not making use of the Rails scaffolding. That is, as they say, “a whole nother” story. Short answer: the scaffolding gives you a quick start, but also a quick end. It explains nothing and leaves you with a lot of work to do to reverse the ill effects of having a lot of “one-size-fits-none” code lying around your application directory.

So no scaffolding. Also, no REST theory—but by all means have a look at the theory once you get into the practice. It’s just not my focus here.

In what follows, I’ve tried to be concise—minimalist, almost. I’d advise not skimming over anything, even if you think you already know it. I’m choosing the path carefully. If you don’t trust me as a guide, that’s another matter entirely :-) If you do, welcome.

What a (non-RESTful) Rails application does

The job of a Rails application is to provide responses to requests. Responses are generated by controller actions, which are (in Ruby terms) instance methods of controller classes.

When your application receives a request, the first order of business is to figure out which action to execute. The subsystem that does this is the routing system. It’s the routing system’s job, for every request, to determine two things:

1. controller
2. action

If it cannot determine those two things, it has failed, and you get a routing error. If it can, the routing has succeeded. End of story. (You might get a “No such action” error, but that’s not the routing system’s problem. The routing system has done its job if it comes up with an action, whether the action exists or not.)

The main information that the routing system uses to determine which controller and action you want for a given request is the request URL. By definition, every URL that’s meaningful to your application can be resolved to a controller/action pair. If the URL contains information beyond that which is needed to determine a controller and action, that information gets stored in the params hash, to which the controller action has access. (That’s how you get params[:id], for example.)

The routing system uses a rule-based approach to resolving URLs into controller/action pairs. The rules are stored in routes.rb. A rule might say, for example (paraphrased here in English), “A URL with (1) a string, (2) a slash, (3) a string, (4) a slash, and (5) an integer means: execute action (3) of controller (1) with params[:id] set to (5)” (and indeed the default routing rule says exactly that). Rules can be specific, to the point of silliness. It’s perfectly possible to program the routing system so that “/blah” means: “the show action of the students controller with params[:id] set to 1010.” There’s almost certainly no point in such a mapping, but the point is that you can program the routing system in a fine-grained way.
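
In routes.rb terms (Rails 2-era syntax; the silly hard-coded route is just for illustration), those two rules might look something like this:

  ActionController::Routing::Routes.draw do |map|
    # The default rule: string/string/id becomes controller/action/id.
    map.connect ':controller/:action/:id'

    # A deliberately pointless, fully hard-coded rule.
    map.connect 'blah', :controller => 'students',
                        :action     => 'show',
                        :id         => '1010'
  end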

In the non-RESTful case, the URL is all that the routing system needs to do its job of performing a rule-based determination of a controller and an action.

In the RESTful case, it isn’t.

Enter the verbs

This is the crux of RESTful routing in Rails. Everything else flows directly from this, so make sure you understand it.

Instead of routing based solely on rule-driven mapping of each URL to a controller/action pair, RESTful Rails adds another decision gate to the chain: the HTTP request method of the incoming request. That method will be one of GET, POST, PUT, or DELETE. It’s the combined information—URL plus request method—that the RESTful routing uses to determine the controller and the action.

That means that for every incoming request, the correct controller/action pair is determined not per URL, but per URL per request method. That, in turn, means that a given URL, such as this:

  http://blah.blah/houses/14

might map to two or more different controller/action pairs. It all depends on the HTTP request method.

In theory, any one URL can be routed to as many as four controller/action pairs, because any one URL can be used in a GET, PUT, POST, or DELETE request. In practice there aren’t that many permutations, because some combinations of request method and URL semantics are not meaningful. But the principle is what matters: a single URL no longer has an unambiguous meaning, but must be interpreted in conjunction with the request method.

Furthermore, these conjoined interpretations are hard-coded to a pre-determined set of seven actions: index, show, new, create, edit, update, and destroy. (You can add custom ones, but those are the canonical ones.) For example, the “houses” URL above, if requested as a GET, automatically routes to the show action of the houses controller, with params[:id] set to 14. If submitted with a PUT, it goes to the update action. A URL with no id field (/houses) goes either to index or to create, depending on the request method. And so forth.
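
As a rough sketch, assuming map.resources :houses in routes.rb, the standard combinations play out like this:

  GET    /houses          -> houses controller, index action
  POST   /houses          -> houses controller, create action
  GET    /houses/new      -> houses controller, new action
  GET    /houses/14       -> houses controller, show action    (params[:id] == "14")
  GET    /houses/14/edit  -> houses controller, edit action
  PUT    /houses/14       -> houses controller, update action
  DELETE /houses/14       -> houses controller, destroy action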

That, as I say, is the crux of the matter: routing based on URL plus request method. Keep this in mind as you get into the details and bells and whistles of RESTful Rails.

Interpreting requests, though, is only half of the job of the routing system. The other half is the generation of strings.

RESTful URL generation

When you write this in your view:

  <%= link_to "Click here for help", :controller => "users", :action => "help" %>

your view ends up containing this:

  <a href="/users/help">Click here for help</a>

It’s the routing system that does the job of processing the link_to arguments and figuring out what the URL (or, in this case, the relative path) in your tag should consist of.

The same thing happens with RESTful routing, except that you never have to spell out the controller and action. Instead, you call yet more helper methods. Compare this:

  <%= link_to "User profile for #{user.name}",
               :controller => "users", 
               :action => "show",
               :id => user.id %>

with this:

  <%= link_to "User profile for #{user.name}", user_path(user) %>

You don’t have to define the method user_path. It comes into being automatically, when you write:

  map.resources :users

in routes.rb. And it has a simple job: return the right string, in this case the string "/users/14" (assuming that user.id is 14).

For every resource you route, you get a fistful of such methods: user_path(user), users_path, new_user_path, and edit_user_path (plus all of these with _url instead of _path). These methods do nothing but generate strings. They have no knowledge of request methods or REST. In fact, they’re just examples of named routes—methods that generate the right strings for specific routing rules—and you can use named routes in routes.rb even without REST. The only REST-related special treatment is that map.resources automatically writes a bunch of these methods for you. You can think of map.resources as, primarily, a macro that writes named route methods, much as attr_accessor automatically writes getter and setter methods.
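
Concretely (assuming, again, that user.id is 14, and picking example.com as a stand-in for your host):

  user_path(user)        # => "/users/14"
  users_path             # => "/users"
  new_user_path          # => "/users/new"
  edit_user_path(user)   # => "/users/14/edit"
  user_url(user)         # => "http://example.com/users/14"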

The specifics of what the various RESTful named route methods do is for future study. The point here is to see the roadmap. You do map.resources :users, and from that point on, you can use methods in your views to create URL strings, rather than having to spoonfeed the information about which controller, action, and id are involved.

But that still leaves the question of the request method. How does "/users/14" know which action to trigger when clicked?

Specifying request methods

When you write view code that generates path strings (with link_to, form_for, link_to_remote, etc.), you want the right string, obviously, but you also need the link, when clicked, to use a particular HTTP request method for the request. Otherwise the RESTful routing system won’t have enough information to make sense of the URL.

The helper methods that generate hyperlinks all have sensible HTTP request method defaults (which you can override if needed). link_to generates a link that will submit a GET request. form_for generates a POST form tag (method="post"), unless you tell it to use PUT (which is conventional for update operations, as opposed to new record creation operations), and so forth.

Again, the named route methods don’t have request method intelligence. The enclosing hyperlink-writing methods (link_to and friends) do. They just use the named route methods as lower-level helpers for the specific purpose of generating the right strings.
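
For example, to override link_to's default GET (a sketch; the resource and the confirmation text are just illustrative):

  <%= link_to "Delete this user", user_path(user),
              :method => :delete,
              :confirm => "Really delete?" %>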

Invisible ink

One of the challenges of using RESTful routing in Rails is that you end up with not very much information available to you visually. When you write a RESTful form in your view, let’s say for an update:

  <% form_for :house, :url => house_path(@house.id),
                     :html => { :method => :put } do |f| %>
  <% end %>

you never see the word “update” in routes.rb, nor in the URL, nor in the view templates, nor in the HTML source of your rendered views. You just have to know that a thing_path-style named route, coupled with a request method override to PUT (override of the default POST for form_for, that is), will result in a form that, when submitted, will send a PUT request to the update action of the houses controller. And you have to trust that the routing system will succeed in so routing it.

RESTful routing pushes most of the routing intelligence—which, as you now know, means the determination of a controller/action pair from an incoming request—under the surface. You have to learn how the REST-ified routing system thinks. The early phases of learning RESTful routing tend to involve memorizing the combinations of named routes and request methods, and which action they point to. The good news is that there’s a finite number of them, and they make sense. If it seems like routing soup, hang in there and look closely at the logic. It will come clear.

The rest…

That’s the basics. There’s a lot more to it, including (but not limited to) more “magic” shortcuts. But if you get the basic ideas you’ll be in good shape.

  • The basic routing system resolves a URL to a controller/action pair.
  • RESTful routing resolves a URL/request-method combination to a controller/action pair.
  • map.resources :things generates a bunch of named routes (things_path, etc.) for you automatically.
  • You don’t see as much visual evidence of the routing logic with RESTful routing as with non-RESTful routing, so you have to learn exactly what it’s thinking, especially the seven hard-coded action names.

Now go forth and REST. Oh, one more thing. Here’s a chart I once made, showing how the named routes map through the request methods to the seven canonical actions. The chart uses the _url methods (which give you the whole thing, including http://), but the _path versions would exist too.

RESTful routing chart

The bailout bill has just passed. I know very little about economics, little enough that I don’t feel entitled to a strong opinion one way or the other on whether the bill should have passed. But I am suspicious of it.

I’m suspicious of it, for one thing, because of the fear-mongering that has surrounded it; it’s very reminiscent of the ongoing “Terrorists will come and kill your family if the executive branch doesn’t get a blank check for waging undeclared war” campaign, and things in that vein.

But I’m even more suspicious of the bill because of all the rhetoric about how it will help “Main Street” as well as “Wall Street”. I don’t know whether it will or not, but what troubles me is the fact that this kind of rhetoric makes it sound like Congress and the Bush administration are desperate to help Main Street. The fact is that, in general, they’re not.

Every microsecond of every day in the history of this country there have been uncountable opportunities for the government to help citizens with financial problems, difficulty paying for a home, lack of job opportunities, inability to get credit, and all the rest of it. The thrust of the behavior of the government for most of the history of the country has been not to bother helping such people to any significant degree.

Now, all of a sudden, helping Main Street leaps to the front of the congressional and executive agenda. I’m disinclined to buy it. If the common weal were really a government priority, we would have known by now. I find it immensely suspicious that the greatest outpouring of social concern, at least as measured in money, comes tethered to a Wall Street bailout.

If Main Street is going to benefit from the delivery of a de facto blank check to Wall Street, surely it would not benefit any less from having money delivered to it directly. But you don’t hear any talk of, say, the government purchasing houses for the victims of fiscal mismanagement. I suppose it would have taken too long to draft a bill that did that; and as we know, the earth would have left its axis if the bill had not been passed this week….

Tracks a-go-go at RubyConf 2008!

September 13th, 2008

Ruby Central is gearing up for RubyConf 2008, which has a fantastic program and which you can still register for (at time of writing, anyway!).

People have noticed, naturally, that we’ve gone over entirely to a multi-track format (except for keynotes and a couple of other special slots). And they’re surprised; we used to be one-track, and then last year we were multi-track but with a good dose of plenary sessions.

So I thought I’d say something about the multi-trackedness of RubyConf 2008, for anyone who’s interested.

The bottom line is that we’ve scheduled multiple tracks because we got so many really, really good proposals. Of course we can’t accept all of them; we can’t be that multi-track. There will always be a cutoff, and where the cutoff comes always involves a judgment call. This time around the judgment was that the number of talks we’d have to exclude, in order to dilute the multi-trackedness significantly, was too great.

In fact, we started drafting a schedule without explicitly discussing the multi-track issue; it mostly emerged from what we jotted down, and then it continued to make sense to us as we started analyzing the track issue more closely.

People have asked whether it’s about the size of the event. It is, in a couple of ways—subtle ways, perhaps, but important.

For one thing, we know that not every speaker is comfortable getting up in front of 500 people. Lots are, but it’s still a lot to ask. Breakout sessions make for situations in which more speakers are likely to be comfortable.

Of course, if there are only fifteen speakers, we could easily find people who don’t mind a big audience. But what about that “only fifteen speakers” thing?

In a conference with 400-500 people present, it’s definitely more fun if, say, twelve percent of the people prowling the halls and sitting next to you at lunch are speakers, instead of two or three percent. Having fifteen speakers at an event with over 400 people isn’t the same, for anyone, as having fifteen speakers at an event with sixty people. If the ratio is too lop-sided, it gets too much into the “us and them” thing. We’ve never been into that.

Another reason we’re OK with moving toward a multi-track format is the proliferation and success of the Ruby regional conferences, many of which are one-track. Everyone should attend, at some point, a one-track conference. It’s really cool the way everyone at such a conference shares the same experience. My first conference was a one-track academic film conference in 1985, and it was great. And the wonderful flowering of the Ruby regional conference culture means that, even if it isn’t at RubyConf, many Rubyists will get a chance to have that experience.

We started our regional conference grant program in 2006 in the hope that “regional” wasn’t going to mean “provincial”—that regional conferences could be top-notch events—and that hope has been fulfilled beyond what we could possibly have wished for. (And certainly way beyond what we can take credit for. The regional organizers have been amazing!) These high-quality small events can address many needs and desires, including the desire for the experience of a one-track format.

In sum, the RubyConf format for 2008 is a format for its time, its year, its configuration of the Ruby world. We’re nothing but excited about it and hope you’ll come and share the fun!

Back from RailsConf Europe 2008

September 6th, 2008

I got home yesterday from RailsConf Europe 2008 in Berlin, and am very happy to say that the event was a major success.

It was particularly gratifying to hear from many attendees that they found the program content more advanced and more instructive than last year. It’s always hard to fine-tune the level of talks across a big program like this, and I’m really glad to have evidence that people overall felt it had gone in the right direction.

Highlights included keynote addresses by David Heinemeier Hansson and Jeremy Kemper, as well as a Rails core team panel discussion with David, Jeremy, and Michael Koziarski. DHH led us through some very interesting thoughts on the notion of “legacy” code, and how that concept plays out with respect to one’s own development and growth as a programmer. Jeremy talked about performance, and masterfully expanded the horizon beyond the shop-worn “Does Rails scale?” stuff to some very specific and powerful techniques for evaluating and adjusting performance.

We also held a “Symposimi” (the name is based on a misspelling in the program; it should have been “Symposium” but came out “Symposimi,” and I decided that sounded really cool!) on the subject of Ruby versions and implementations—who’s using what, what’s targeting what, the pros and cons of moving to 1.8.7 and/or 1.9. A symposimi is a town-meeting-like gathering of people who want to ask and answer questions about a topic. It’s more audience-based than a symposium, and less hierarchical.

The symposimi was fun for me because I got to do some live code demos, which I usually don’t at the conferences I’m an organizer of!

Lots of people asked about next year. We don’t know yet where RailsConf Europe will be in 2009. Probably not Berlin, just because we’d like to move it around. If you have suggestions (and a rationale other than that you happen to live there :-) by all means let me know.

Now that RCE2008 is over, I’m looking forward to RubyConf. Stay tuned for announcements of the program and registration!

I know it’s pointless—I’m not going to make a dent in it—but I feel moved to say something about the biggest problem in online discourse: pseudo-persuasion.

The term is a bit awkward, but you’ll recognize what I’m talking about because it monopolizes an almost literally incredible proportion of email lists, news groups, blog comments, and IRC chats, and you’ve seen plenty of it. I’m talking about the endless stream of this vs. that. Emacs vs. vi, Ruby vs. Python, Ubuntu vs. Redhat, Mac vs. PC, tabs vs. spaces, and all the monumentally huge and boring rest of it.

Yes, there are interesting comparative points you can make about all of these pairings. Yes, some people make interesting points. I’m not talking about those points. I’m talking about the other 99.99% of online comparative talk, the inexhaustible store of “mine is better than yours” drivel, the vacuous chatter that, despite its vacuity, manages to choke and clog the online world as if it were of substance.

I call it pseudo-persuasion because it sounds like persuasive speech, but isn’t. It is persuasive neither in effect, nor in intent. Millions upon millions of words pour forth—arguments in favor of A and against B, checklists of assertions and accusations, praise of features and denouncement of shortcomings—all delivered in the most fervent persuasive language but not one syllable actually persuading anyone of anything, and not one syllable written in the expectation of persuading anyone of anything.

Have you ever said to yourself, “Gee, someone on IRC said that Emacs keybindings aren’t intuitive, so starting tomorrow I’ll switch to vi”? Have you ever met anyone who, after asking a question about a Linux problem and receiving an answer consisting of the single utterance, “OS X!!”, proceeded to run out and buy a Mac? Did you start using your current favorite programming language because someone told you, in so many words, that the one you had been using sucked and this one was better?

My late father used to say that “No one ever convinces anyone of anything.” He didn’t believe it literally, or he would not have bothered co-authoring the brief in Brown v. Board of Education. In general, he didn’t mean it with regard to legal and forensic argumentation. He did mean it, however, with regard to cocktail party chatter, exchanges among politically widely-separated colleagues, heated classroom arguments among students, and the like: day-to-day exchanges where the urge to state an opinion does not imply an inclination to take someone else’s opinion seriously.

Non-persuasive persuasion can serve a purpose. It’s good, for example, for students to put their thoughts into words, even though they’re not really listening to each other. Usually, though, it’s just a way to fill otherwise awkward social time.

When people yap at each other about Emacs and vi, however, it’s not filling awkward social time. To be honest, I don’t know what it’s doing. It certainly is not debate. It sounds like debate, and it uses rhetorical devices that are also found in debate. But it is not debate. No one can “win”, no one is listening to anyone else, and the likelihood of persuasion being achieved approaches zero. Nothing is at stake, and no one actually expects any conclusion, outcome, or productivity to emerge from the exchange.

But my case against pseudo-persuasion is not that the practitioners don’t take each other seriously enough. They hardly could, given how much of this crap there is. My case against it is that it’s a staggering waste of time, mental energy, and passion. Can you imagine what would have happened if, over the past couple of decades, participants in online forums had taken, say, five percent of the time they’ve spent pissing at each other, and used it instead to collaborate on software or technical writings?

My friend and nearly-neighbor Erik Kastner is going to be joining me to teach the Ruby Power and Light course “Advancing With Rails” in Edison, New Jersey, August 18-21. This will be RPL’s first co-taught course, and I’m really looking forward to it.

See the calendar at Ruby Power and Light for more info!

During the week of July 6-12, I invite and encourage everybody who includes links in their email, blog posts, online chats, and other documents, to link to something other than Wikipedia.

I’m not trying to be a Wikipedia slayer. It wouldn’t matter if I were; that’s not going to happen.

I just want to remind everyone that there are thousands and thousands of interesting, well-informed, thought-provoking, educational websites out there, written by professors, researchers, doctors, artists, scientists, practitioners of every craft and industry—and however you slice it, these websites are getting a raw deal when it comes to links.

It’s not about whether Wikipedia articles are accurate or not. Some are, some aren’t. But that’s true of the whole Web. Let’s stop acting as if Wikipedia has some special status.

The best thing about the Web is that it isn’t an encyclopedia. And Wikipedia is evidence that when Web culture meets encyclopedia culture, encyclopedia culture wins. Sure, Wikipedia is collaborative. Most encyclopedias are. They still give off an aura of total, centralized, complete knowledge and authority. And that’s not very Web-like, is it?

So:

  • If you’ve got a point to make about grammar, look for an English (or whatever language it is!) professor’s site. There are some great ones. Point the person you’re arguing with to a couple of those.
  • Countries have their own informational websites, some official and some written by people who live there. Many of them are multi-lingual. Are they “balanced”? Probably not, at least not in the network news way. So much the better! Balance on the Web emerges from the quantity and interplay of sites. It’s not supposed to be embodied in every document. How boring!
  • Wikipedia is great for technology-related topics. But so are lots of other sites. Are you sure that Wikipedia’s description of the algorithm you’re discussing on that mailing list is really the best? the clearest? the most engaging?
  • You get the idea! Strike a blow for the richness of the Web, and for the beauty of discourse that doesn’t try to be poker-faced and non-committal, even about important issues. Rediscover the expertise of the many Web contributors who write about their own specialties and have taken the time to share their thoughts.

There’s a lot to learn at Wikipedia, but it’s time to spread the linkage!

A guy I was chatting with in the men’s lounge of the spa at Harrah’s in Atlantic City was telling me about “slide words.” I can’t find anything about them (and I’ve tried “slider words” and a few other variants) anywhere. I don’t think he made the term up, and he certainly didn’t think he had.

Anyway, even though I can’t find any background information or previous discussion, I am going to talk about “slide words” (or whatever they’re called).

A slide word, I gather, is a word or phrase that has come to serve as shorthand for an entire argument—except that the argument isn’t really there. We’re all just supposed to think it is. The slide word acts as a black hole, drawing further discussion and thoughtful debate into itself and killing it.

Slide words are bad because they take the place of actual analysis of situations and events. Every slide word has a kind of implicit, “Sigh. Here we go again” attached to it, even though the “again” part is asserted through the use of the slide word itself and not actually demonstrated.

I have something to say here about three slide words: conspiracy theory, Chinese menu, and bikeshed.

“Conspiracy theory”

“Conspiracy theory” is perhaps the best example of a slide word. Consider the following exchange, which is made up but is actually very similar to several I have had:

Me: Apparently there might have been an eighth Challenger victim. A Brazilian fisherman said that his son was struck and killed by falling debris, while they were out on a boat.

Other Person: Why haven’t we heard about it?

Me: It was in the news briefly. I guess it was considered more prudent to downplay it.

Other Person: That sounds like a conspiracy theory.

With the invocation of the term “conspiracy theory,” all further discussion of what might have actually happened is discredited. The events surrounding the death of John Kipalani’s son need not be examined in any detail; nor need the press coverage (or lack thereof). “Conspiracy theory” plays the role of a rebuttal of the statements about the Challenger disaster, even though it has no actual connection to them.

Here’s another example:

Me: The only people who profited from 9/11 in any way, financially or politically, were George W. Bush and his family and friends. I therefore assume, as a matter of the simplest logic, that Bush had something to do with it.

Other Person: What are you, a conspiracy theorist?

Again, the slide word (or slide phrase) gets played as if it were a trump card, when in fact it has nothing whatsoever to do with the question of Bush’s culpability in the 9/11 attacks, and neither refutes the logic that’s on offer nor adds information that might bring about a reconsideration of that logic.

“Chinese menu”

Another slide word I’ve come across, in a somewhat narrower setting, is “Chinese menu.”

When I was teaching at a university, I was involved in lots of discussions, formal and otherwise, about core curricula: what they should include, how they should be administered, and so on. I remember that in one series of such discussions, any time anyone suggested anything along the lines of having students choose one or more courses from each of several course groupings, someone else would say, “That’s like a Chinese menu.” Eventually it became just “Chinese menu.”

I have no memory of any discussion of why it was considered a bad idea to administer a core curriculum this way. All that was required to rebut the idea was “Chinese menu.” Actual argumentation did not enter into it.

“Bikeshed”

Another slide word, a rather obnoxious one that seems to be enjoying considerable popularity these days, is “bikeshed.” If someone says “bikeshed,” they’ve said all they need to say (or at least all they think they need to say, and certainly all they’re planning to say) to establish that what you have been talking about is trivial and not worth discussing.

Saying “bikeshed” to someone, instead of telling that person outright that you find his or her statements trivial and worthless, is not only needlessly indirect but, in most cases I’ve seen, wrong.

The original bikeshed concept, as I understand it (which is from second-hand accounts, so I could be wrong), had to do with the phenomenon of committees spending more time arguing over what color to paint the company bikeshed, than over the allocation of funds to build a nuclear power plant.

The problem with the typical usage of “bikeshed” today is that there’s no nuclear power plant in the picture. It’s more likely to be a bunch of people on an email list discussing the best name for a proposed new method in Ruby, or something like that. Then someone who feels superior to the discussion (which would exclude the creator of Ruby, as well as many of his colleagues, associates, and friends) comes along and says “Bikeshed.”

But if we weren’t talking about method names, we’d be talking about literal constructors for runtime objects. And if not that, then perhaps the question of whether parentheses around parameter lists in method definitions should be mandatory. All of these things are important to people interested in the Ruby programming language; but, with respect, I will state unequivocally that none of them is as important an issue as nuclear power.

Furthermore, saying “bikeshed” implies that you think the group you’re addressing not only is wasting its time on the current topic, but has a history of spending too little time on important things. Even scaling it down so that the important things aren’t really important things in the nuclear power sense, no one ever says what those things are. That’s probably because “bikeshed” is just a snide way to say, “What you’re saying is stupid,” and not a unit of cogent or well-sustained argumentation of any kind.

Thus slide words. I’m glad there’s a name for them, even though it’s puzzling that the only person who seems to have heard the name is that guy at Harrah’s.

Death of a racehorse

May 4th, 2008

I’ve always vaguely disliked horse races. The anthropomorphizing of the horses, the claims that they know that they’re involved in a race and that they share the goals of their owners, is manifestly silly and self-serving. And the whipping always bothered me. I suppose I made myself believe that horses didn’t really care and that an attack with a whip was, to them, kind of like a verbal exhortation to us. (Not that verbal exhortations can’t be painful, but they’re not physical).

The death of Eight Belles shocked me out of my indifferent, complacent position.

All the crap in the news about how noble she was, how competitive her spirit, how great her self-sacrifice… it’s all smug and disgusting beyond belief, despite the accompanying descriptions of the tears glistening in the eyes of the various stakeholders. What really happened was that this horse was forced to run as fast as she could, for reasons she could not understand and that had nothing to do with her well-being, and as a direct result, her legs fell apart, and then someone killed her.

That’s it; that’s all there is to it.

Why is this allowed to go on? Is it simply because more horses survive races than don’t?

For some reason, we continue to give the benefit of the doubt to this bizarre, nasty, money-drenched “sport”. Except that for me, at this point, there is no doubt, and no further conferral of the benefit.

In part 1 of this two-part post, I explained my concern that the word “resource” has become too closely associated in Rails-related usage with some combination of model, database table, and controller/model stack—none of which do justice, as definitions or even first approximations, to the concept of a REST resource as originally described by Roy Fielding. Here, I’m going to expand on this observation by exploring a few ramifications of the same topic.

Resources, controllers, and models (or lack thereof)

As I explained in the previous post, the concept of “resource” has no database implications—indeed, no implementation implications. A resource does not have to have a corresponding model. It also does not have to have a corresponding controller. Resources are far more high-level than controllers and models. Controllers and models are tools with which you provide access to representations of resources.

However, if you want to draw a line between resources and Rails, by far the better line to draw is the one that points to controllers rather than models. A controller is not a resource, but it comes closer than anything else in your application to taking on the features of your resources. Models are another big step away.

If controllers are closest to resources, how does this play out? One way is in the creation of resources for which requests are handled by a controller that has no corresponding model.

My favorite example of a likely modelless resource is the shopping cart. In Ruby for Rails, I use a shopping cart in my central example. When I started working on this application, I tried to model it directly; I imagined I would have a ShoppingCart class, a shopping_carts table, and so forth.

I quickly realized, however, that I didn’t need that. What I was calling a “shopping cart” was really a virtual construct or, in Rails terms, a view. I had Order objects and Customer objects, and the shopping cart was basically a screen showing all of a particular customer’s open orders. Calling it a “shopping cart” was just a kind of semantic sugar. There was no need to persist it separately from the persistence of the orders and the customer.

If I were writing the same application today using RESTful idioms, I would in all likelihood do:

  map.resources :customers do |c|
    c.resource :shopping_cart
  end

or words to that effect. I would then have a shopping_carts controller, with a show action (probably leaving all the related CRUD stuff back in the orders controller, though there might be several ways to approach that part of it). And I would, without hesitation, describe the shopping cart as a resource—even though it has no ShoppingCart model behind it. From the perspective of the consumers of my resources, it doesn’t matter whether there’s a ShoppingCart model (and shopping_carts database table) or not. I can decide on the best application design, and use RESTful Rails techniques to support my design decisions appropriately.
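
For concreteness, here's a sketch of what such a modelless controller might look like (the Customer association and the notion of an order status are assumptions about the domain, not part of the original example):

  class ShoppingCartsController < ApplicationController
    # GET /customers/:customer_id/shopping_cart
    def show
      @customer    = Customer.find(params[:customer_id])
      # No ShoppingCart model anywhere; the "cart" is just a view of open orders.
      @open_orders = @customer.orders.find(:all,
                       :conditions => { :status => "open" })
    end
  end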

A resource is not a model, and it’s also not a controller. Identifying the resource with the controller is, however, somewhat closer to the mark. The controller layer conforms most closely to the resource mapping, which makes sense since the controller is the port of call when someone connects to your application.

Another area where misunderstandings arise in the course of designing RESTful services in Rails is in the matter of how identifiers (URI) map to resources—and not just how, but how many.

Identifiers and resources: not always one-to-one

I’ve seen people tie themselves in knots trying to come up with the best way to label and/or nest resources. One of the principles that’s gotten lost in the mix is that the ratio between resources and identifiers does not have to be one-to-one. Fielding states:

[A] resource can have many identifiers. In other words, there may exist two or more different URI that have equivalent semantics when used to access a server. It is also possible to have two URI that result in the same mechanism being used upon access to the server, and yet those URI identify two different resources because they don’t mean the same thing.

Therefore, it’s possible that this:

http://dabsite.com

and this:

http://dabsite.com/welcome

can identify the same resource, which would probably be described as something like “The welcome and orientation information at dabsite.com”. The reason they’re the same resource is not that they generate the same HTML. Rather, they’re the same resource because they’re published as the same resource.

It’s also possible that this:

http://dabsite.com/orders/211   # 211th order in the system

and this:

http://dabsite.com/orders/042208-003  # third order placed on 4/22/08

identify different resources, even if the third order placed on 4/22/08 happens to be the 211th order in the system. That’s because resources are not database rows. In this case, the two requests might generate the same HTML, but still pertain to different resources.

You don’t have to make a point of having a non-one-to-one ratio between your resources and your identifiers. Just be aware that if such a ratio emerges, in either direction, you’re not doing anything inherently “unRESTful.”

CRUD and REST and resources

One of the nice things about the REST support in Rails is that it dovetails with CRUD-based thinking about modeling. I add in haste: REST is not CRUD, and CRUD is not REST. (That’s no secret, but I want to go on record with it.) But in Rails, there’s a nice relationship between them.

The REST support in Rails emphasizes the convention of CRUD operations. map.resources gives you a fistful of named routes that have built-in knowledge of CRUD action names. The emphasis on CRUD at this level encourages you to think of modeling for CRUD. Instead of having, say, a users controller with a borrow_book action, you can have a loans controller with a create action. In many cases, this way of thinking might also wag the dog of your domain modeling. Thinking about CRUD in the controller might, for example, lead you to conclude that you should have a Loan model.
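
A minimal sketch of that shift in thinking (the model, controller, and parameter names here are purely illustrative):

  # Instead of bolting a custom action onto users...
  #   map.resources :users, :member => { :borrow_book => :post }
  # ...let the CRUD conventions suggest a loans resource (and a Loan model):
  map.resources :loans

  class LoansController < ApplicationController
    # POST /loans
    def create
      @loan = Loan.new(:user_id => params[:user_id], :book_id => params[:book_id])
      if @loan.save
        redirect_to @loan
      else
        render :action => "new"
      end
    end
  end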

It’s perfectly fine—indeed, in my view, it’s very productive—to think along these lines, to bring your modeling and your REST-friendly CRUD operations into harmony, as long as you understand that none of this is actually about resources as such. Rather, it’s about the Rails flavor of implementing the handlers that underpin the creation of resource representations.

Does that sound like just a lot of extra words? It isn’t. It’s a lot of words, but they’re not extra. Again, it’s important not to squeeze the entire framework into the “resource” label. Let a resource be a resource, and let the handler layers be handler layers. They’re nicely engineered—but they’re not resources.

And then there’s the word “representation,” which crept into my “extra words” sentence but which is the least extra of all of them.

Representations: the one that got away

The representation is, in my view, the one that got away: the central concept in REST that no one in the Rails world ever seems to talk about. We need to, though. It’s vitally important.

Your server does not traffic in resources. It traffics in representations of resources. Users of your application do not receive resources. They receive representations. The distinction is big; at stake is the entire meaning, and meaningfulness, of the notion of a resource.

We need the concept of “representation” because it’s the part of REST theory that relieves the pressure on the term “resource.” After all, how can a resource be a “conceptual mapping” (Fielding) and a sequence of bytes that a server sends you and a controller-model stack…? It can’t, and it’s only the first of these things. The second, the response itself, delivers a representation of a resource.

One resource can have many representations. There’s no big news here; we all know that a server can give us a text version of Jane Eyre or a movie version or an audio version. (I’ll refrain from getting philosophical about whether or not a book and a movie are “the same” in any deep sense. They’re the same enough, in this context.) The point is that we don’t need to mush everything into the term “resource.” Rather, we benefit by yanking that term up to the high level where it belongs, and applying the term “representation” to the actual response we’re getting.

Fielding has much more on representations in his dissertation, and I’m not going to try to paraphrase it here. My point is to encourage the liberal use of the term in Rails discourse about REST. The poor term “resource” has already been given too much to do. We need to delegate some of the domain description to the other terms that apply to it.

Now what?

The use of the term “resource” to mean things that, I’ve argued here, it really doesn’t mean is rather deeply entrenched, and widespread, in Rails discourse. I don’t have any quick fix for this. I do have a few recommendations, though.

First, read Roy Fielding’s dissertation. You can skip to chapters 5 and 6 and get a great deal out of them.

Second, pay particular attention to the concept of the representation. I don’t think we can get much further in exploring REST and Rails unless the representation makes a comeback. “Resource” is just plain spread too thin in the way it’s used in and around Rails, and there’s no reason why it has to be, if we look at the theory as a whole.

Third, and last, don’t assume that any deviation from the out-of-the-box behaviors in your RESTful Rails applications is unRESTful. The defaults are in place because they’re high percentage. But they’re just as opinionated as the rest of Rails, and in some respects more so. That’s OK, but do understand that they’re REST-friendly tools. They’re not a definitive statement on the entirety of what REST is.

REST is not an easy topic, and it’s unlikely that anyone is going to create a way for you to create and maintain RESTful applications, over time, without you trying to get a handle on it and developing your own understanding of resources, representations, requests, and responses. I hope these posts will help you out in that endeavor.

References

Fielding, Roy Thomas. Architectural Styles and the Design of Network-based Software Architectures. Doctoral dissertation, University of California, Irvine, 2000.