I was raised on a fairly traditional diet of imperative programming. I wrote my first programs in QBASIC with a couple of friends on an old PC at school, around 1995. Then the WWW came along, and I started working with early versions of JavaScript. Then, thanks to the dot-com boom, I landed a work experience placement as a Visual Basic programmer, around 2000. Then I did some work in C++, because, at the time, it seemed to be the pinnacle of both programming power and programmer prowess. But by then, computers had crossed a kind of magical threshold: they were so fast and so cheap and had so much memory that even grossly inefficient code was fine for most of the things that users wanted to do. So, I picked up C#. By this time I was studying computer science at university, where we used Java, C and C++.
It wasn't until my third year of university that I had even a vague inkling that there were other ways to tell a computer what to do: I took a course in Lisp and Prolog. I must admit that I didn't really get it, at the time. I studied hard for the exam, and did fine in the course, but I figured it was just sophistry. After all, if it was any good, why were so few people using it?
But, that course on functional programming had some subtle side effects. Afterward, having gone back to working in Java or C++, I frequently found myself staring at an ugly mess of for loops and iterator objects, and thinking, "this would be much better if I could just use a little lambda function and call mapcar." The seed of discontent had been planted, and it was beginning to germinate: I finally started to complain about the programming paradigm that I'd grown up with.
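To illustrate the kind of rewrite I kept wishing for, here is a rough sketch in Ruby (which comes up again below) rather than the Java or C++ of the original complaint; the function names are just invented for the example:

```ruby
# Imperative style: an explicit loop with a mutable accumulator,
# the shape of code I kept writing in Java and C++.
def even_squares_loop(numbers)
  result = []
  numbers.each do |n|
    result << n * n if n.even?
  end
  result
end

# Functional style: the same computation as a select/map pipeline,
# Ruby's rough equivalent of a little lambda passed to mapcar.
def even_squares_map(numbers)
  numbers.select(&:even?).map { |n| n * n }
end

puts even_squares_loop([1, 2, 3, 4]).inspect  # [4, 16]
puts even_squares_map([1, 2, 3, 4]).inspect   # [4, 16]
```

The second version says *what* is being computed, not *how* to loop over it, which is precisely what those iterator-laden for loops obscured.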
I have not yet decided whether the grass is indeed greener in the functional programming pasture, but I am starting to learn more about it. And, from what I've read, it seems that lots of other people have arrived at this point, as well. So, here are some interim observations.
I think much of the revived interest in functional languages is due to the better-than-average mainstream penetration of Ruby (in particular, Ruby on Rails). I like Ruby. It provides a pleasing mixture of functional and imperative styles, and it has a familiar syntax and type system. The main reason I haven't used a lot of Ruby for my code at work is its lackluster run-time performance: simulators are programs of the "the faster, the better" variety (provided they are correct, of course). I also wasn't very happy with the available development tools: having smart identifier completion and in-line documentation (e.g. Eclipse) improves my design-time performance dramatically, but it's hard to do it well for dynamically typed languages. I ended up writing my Ruby in VIM, a general text editor that's older than I am. But, both of these stand to improve in the near future.
Having read several of Paul Graham's essays, and done some reading on Wikipedia, I decided that Lisp was definitely on the list of things to learn. I've been working through Peter Seibel's Practical Common Lisp. I have been quite pleased with it, so far.
I'm also experimenting with Haskell, but I'm in the very early stages.
I'll close with what I think is a very insightful comment from a tutorial by Eric Etheridge, "Haskell for C Programmers:"
If you play around with Haskell, do not merely write toy programs. Simple problems will not take advantage of Haskell's power. Its power shines most clearly when you use it to attack difficult tasks. Once Haskell's syntax is familiar to you, write programs that you would ordinarily think of as too complicated. For instance, implement graph-spanning algorithms and balanced-tree data types. At first, you'll probably trudge ahead using 'imperative thought processes'. As you work, you'll hopefully see that Haskell's tools let you dramatically simplify your code.
I think this is a general truth for learning any programming language that is new to you. Unless you try to do something hard, you'll always think, "So what, I could write that in Blub just as easily." Most of the canonical functional programming examples found in tutorials and textbooks, like computing Fibonacci sequences and factorials, undersell their language for precisely this reason. (Incidentally, I don't really recommend the rest of the tutorial.)
So, what's my hard something? It's still a bit of a secret...