Posts Tagged programming

Step Away From The Clipboard

We’ve all done it.  And we’ve almost all regretted doing it.  So it’s time to talk about an uncomfortable subject for many.

Copying and pasting code.

The temptation is constantly there.  You see some code here that works (or at least appears to work).  You obviously don’t want to reinvent the wheel, and maybe some aspect of the code makes re-factoring it out difficult.  Maybe you agonize about it a little, or maybe you blissfully ignore the dangers.  In the end, a Control-C and Control-V later, that block of code has reproduced itself.  The world doesn’t tragically end, everyone you know doesn’t die of a horrible disease, and your system still continues to function.  So you figure, hey, copying and pasting code isn’t so bad, and so you copy and paste that same line of code again.  And again.  And again.  And before you know it, your computer’s clipboard has become an indispensable tool.  Maybe you even go so far as to push others to follow in your copy-and-pasting footsteps.  I’ve seen people put something along the lines of “if you want to do x, copy this code from file y” in their documentation.  And the world still continues to go on.

Then, suddenly things change.

You find a bug in that original code you copied.  It’s an easy fix, except that single bug has now reproduced like a virus, infecting your entire system.  Or maybe that code was golden, but the requirements do what they love to do so much and completely change.  Again, a quick fix to the original code would do it, but the copying and pasting has, best case, increased the difficulty of the fix by a factor of the number of times you copied that code.  Worst case, different copies will have subtle differences (maybe some are still on an older revision of the change); maybe some are so different you can no longer recognize their lineage, yet under layers of re-factoring the old functionality still lives on.

We all know this risk, but the problem is actually more severe than what I’ve described.

Copying and pasting the same code, the same functionality, the same patterns: it all means you’ve missed a chance to abstract out some part of your system.  If you have dozens of classes that follow nearly the same pattern, there is something important about that pattern that needs to be captured at a higher level.  Not only will that make your code more maintainable, it will also make your code easier to reason about.  You can make better deductions about code that is built from higher abstractions than about code that simply looks similar to that other code over there.
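To make that concrete, here is a minimal, hypothetical Clojure sketch (none of this comes from a real codebase): the same try/log/catch wrapper gets pasted around every operation, until the pattern is finally captured once as a higher-order function.

(defn save-user [user]
  (try
    (println "starting save-user")
    {:saved user}   ; stand-in for the real work
    (catch Exception e
      (println "save-user failed:" (.getMessage e)))))

;; ...the same shape gets pasted again into save-order, send-email, and so on.
;; Capturing the pattern at a higher level instead:

(defn with-logging [op-name f]
  (fn [& args]
    (try
      (println "starting" op-name)
      (apply f args)
      (catch Exception e
        (println op-name "failed:" (.getMessage e))))))

(def save-user (with-logging "save-user" (fn [user] {:saved user})))

Now one function owns the pattern, so a bug fix or a changed requirement touches one place instead of every pasted copy.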

And of course in many cases, copying and pasting code is a symptom of a larger problem: a lack of understanding of the original code.  It is too easy to find code that does something similar to what you want it to do, and then copy it verbatim.  But is that “something similar” exactly what you want?  Maybe it’s doing something subtly different, something you might not need or even want.  Maybe there were valid assumptions made when that code was originally written that you have no business making.  But if you took the easy route and just copied it without understanding it, you will have no idea that is the case.

In other cases it’s a sign of laziness.  Not the type of laziness that avoids work; in fact, re-factoring to eliminate the need for a copy-and-paste job is usually less work.  But a more intellectual type of laziness.  Work that just involves repeating something someone else did is easy, but once you get past that easy part you are left with the more challenging work.  Moving code around is easy; solving problems is the hard part.

Of course it’s not always your fault.  Maybe you are working in a language that makes certain kinds of patterns difficult to abstract, maybe even to the point that it appears to be actively resisting the concept of you being productive (I suppose it could be worse for Lambda, considering what is happening with Jigsaw).  And there are plenty of times when what you are copying actually is too small to be successfully reused.  I’m not saying never copy and paste code, or never reuse the same patterns or functionality.  Just the next time you catch yourself doing it, please stop and ask yourself the following question:

Is there a better way to do this?


Programming in a keyboard-less world

Just the other day, my brand new Transformer Prime tablet arrived.  Aside from being a high-quality tablet (quad-core processor, one of the very first to offer Android 4.0), it is well known for having a docking station accessory, complete with a keyboard, that essentially transforms it from a tablet into a 10-inch netbook.  My phone, an HTC G2, also has a fold-out keyboard, as did its predecessor, my old G1 phone.  So I think it’s safe to say I am a fan of physical keyboards.  Sure, voice recognition can be good for some things, and Swype provides a nice on-screen keyboard, but if I want to type anything of substance, I’m much more comfortable typing it on a nice hard QWERTY keyboard with actual buttons I can press.

Which makes the fact that I’m writing this post a bit ironic.

There was an interesting podcast last week from the IEEE about the keyboard going the way of the typewriter.  Of course I was rather dismayed by the thought.  It’s not just that on-screen touch keyboards will replace them, but that new input devices, such as a stylus with handwriting recognition or a microphone with voice recognition, will be the computer input of the future.

I would argue we are nowhere close to that with today’s technology, at least from where I stand.  Since my Transformer keyboard hasn’t arrived yet, I originally tried to “write” this blog post with my tablet’s “voice keyboard”, and I couldn’t get through the first paragraph without getting frustrated and giving up.  I haven’t really tried using a stylus recently, but handwriting such as mine is typically so bad that even I have trouble reading it, so I won’t begrudge any computer program that can’t read it (and before you try to say that’s just because I’m so used to keyboards that I’ve lost the ability to write neatly, my grade school teachers would be quick to point out I never had that skill, even before I learned to type).  On the other hand, I can type at a reasonably fast pace with pretty good accuracy, so there is no debate about which method I am more proficient with.

But one argument made on the podcast was that kids today will likely grow up so used to voice recognition and handwriting recognition that they may view keyboards as obsolete.  The fact that keyboards may offer a technically superior way to write quickly will not matter to them.  After all, one could easily argue that command line interfaces can be much more productive than GUIs for many tasks, but outside of hard-core hackers, the world has largely moved away from them.  Even software developers have largely embraced tools such as Eclipse as an alternative to hacking on the command line.

And I can’t deny that there are some areas where keyboards are not very good.  For instance, look at writing out math.  Math is typically full of Greek letters, superscripts/subscripts, and other things which are just plain hard to type.  Sure, there are usually obscure keyboard shortcuts for them, and specialized software (such as Mathematica), but no real general-purpose solution.  When I was trying to take notes for the Stanford Machine Learning class on Evernote last year, I can’t tell you how much time I wasted trying to come up with notations for random symbols that kept coming up.
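As an illustration (my own generic example, not something lifted from my actual notes), even a single gradient descent update rule of the kind that class uses is a pile of symbols with no friendly keyboard equivalent; in LaTeX it comes out as something like:

\theta_j \leftarrow \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta_0, \theta_1)

One short line needs Greek letters, subscripts, a fraction, and a partial derivative, and that is before the summations and superscripts show up.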

And of course there are more creative endeavors, such as building “mind maps”, that are just plain hard without a more free-flowing input format.  That’s why many still argue that pen and paper is the superior note-taking device.  Keyboards are great for writing lines of text using a small set of well-known characters, but are rather limited beyond that.

So as keyboard-less input becomes more and more mainstream, how will that affect computer programming?  Today, programming is a perfect example of lines of text optimized for keyboard input.  Using voice recognition to write a Java program?  How insane would that be?  “For, begin paren, double… no, capital ‘D’ double, input, colon, input-capital-V-vals, end paren, open bracket” instead of just typing:

for (Double input: inputVals) {

Case sensitivity, the frequency of special characters and common symbols, terse variable names, camelCase: none of that will work with voice recognition input.  Computer programming is clearly not a place where you want creative, free-form input; you want it heavily restricted to what are legal values.

Or is it?

Will computer languages evolve to take advantage of newer input methods?  Will they start to incorporate more free-form writing rather than just plain text?  Will it even be possible to come up with languages like that?  Or will future freshman computer science students have to spend hours learning ancient typing techniques that have become obsolete outside of writing programs?

I suppose time will tell.


ClojureScript announced

Rich Hickey and the Clojure/core team just announced (ok, they announced it last week, but I was on vacation then, so I actually have an excuse for blogging late) a new project, ClojureScript, a Clojure-to-JavaScript compiler.  With Clojure being a great language to develop in and JavaScript being nearly ubiquitous for web programming (though not always that great to program in), it certainly does look interesting.  I was a little disappointed, since when I heard rumors of it at the TriClojure meetup earlier this month I was hoping it would be an Android library (one that wasn’t slow and didn’t add a few megs to the app), but I will have to play with it some.  I haven’t done much with JavaScript in several years (for which I am thankful), but this should give me an excuse to get back into it.

The “Clojure”/”Google Closure” thing is going to get a bit confusing though.


Clojure Tip of the Day

Avoid writing test cases that compare data structures that include infinite lazy sequences.  It kinda does what you would expect…
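A minimal sketch of the trap (a made-up example, not my actual test):

;; Two lazy sequences of all the non-negative even numbers.  They are equal
;; element by element, and both are infinite, so = keeps realizing and
;; comparing elements forever; the comparison never returns.
(= (filter even? (range))
   (map #(* 2 %) (range)))

Wrap that in a deftest and the test run simply hangs, with no failure and no error pointing you at the cause.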

And yes, this is speaking from experience.


AI and the Future of Programming

I was listening to the latest Java Posse roundup recording on the “Future of Software”, in which the topic of AI came up, and I couldn’t help but chime in on a few things.

The basic question being posed was: will computers eventually become smart enough to write their own software?  Some argued no, that there are fundamental limitations on how computers work that will prevent them from, say, designing usable user interfaces.  Others argued yes, a sufficiently advanced computer could do anything a human being could do.  Still others argued that they were already writing their own software, at least relative to what computer programming was fifty years ago.

My answer: yes, and no.

Yes, I would agree that, barring a fundamentally dualistic nature of mind and matter, a computer can (theoretically at least) do anything a human can (Ray Kurzweil has plenty to say on that subject if you are interested).  So given sufficiently advanced technology, you could develop a computer that does everything we do.  Yes, that technology will likely have little in common with today’s computer chips, as the human nervous system has a fundamentally different architecture than modern-day computers.  But that doesn’t mean it’s not possible to develop a computer system based on those architectures (though I suppose one could argue that the term ‘computer’ wouldn’t then be the best description of it, as arguably ‘computing’ isn’t what it would be doing).

However, just because we could do something doesn’t mean we would do it.  What exactly would be the point of designing a computer identical to a human being?  We’ve got too many real-life human beings running around already.  What we would want is a computer better than human beings.  We wouldn’t be designing them for the hell of it; we would be designing them to solve problems we have.  So certain aspects of human nature wouldn’t make sense to duplicate.  Hatred, as an obvious example.  Panicking in severe situations is another.  And most importantly, freedom of desire.

I’m not going to design a computer program or robot that is going to want to serve its own desires, at least not in the way humans do.  I am designing it to serve my desires.  Sure, given what is said in the previous paragraphs, it should be possible to design a real-life Hedonism Bot.  But why on Earth would anyone want to?  To be useful, it would have to be designed to care about its maker’s (that’s us!) desires.  And that brings us to a part of the software engineering process that will have to continue to be owned by human beings: the creation of the needs for which the software will be created.

Even that isn’t as trivial as it sounds.  I don’t care if the automatic software generator is ten times as intelligent as human engineers; it’s still not going to be able to solve the problem of being given requirements that are too vague any more than its carbon-based equivalents could.  In order for it to generate the software, the requirements for that software are going to have to be drawn out.  What is the desired flow?  How should it handle errors?  What special cases is it going to need to handle?  And developing these requirements is going to end up being the programming of the future.  It’s probably going to be much more natural than what we write today, just as what we write today is much more natural than the programs written fifty years ago.  But there will remain a human element.


Musings of an Object Oriented Apostate

Like probably most software developers of my generation, I was taught from the very beginning of my education that Object Oriented Programming is the ideal programming paradigm for any significantly large application.  C++ was the first language I learned way back in high school (well, not counting the TI-Basic language on my TI-82 calculator), so OOP has been there from the start.  In college there was a comparative languages class that discussed different programming paradigms, but since I had already been convinced of the superiority of object oriented languages like Java, I opted to take the numerical methods class instead.  I was a full-fledged member of the Church of Object Oriented Programming.

Now here I am, a little over six years out of school, and seriously questioning the faith.  The one thing that probably began my conversion was concurrency.  No, I don’t mean the type of concurrency needed for parallelized applications running on the massive multi-core systems we have been promised will push out Moore’s law for another decade or so.  I mean the type of concurrency needed for any system operating with multiple threads.  The issues surrounding concurrency in OOP have been discussed many times before, so I won’t go over them here.  But since I began playing with functional languages such as Clojure and Erlang, not only have I found concurrent programming easier, but many aspects of program design that would have been awkward in the OO world now seem, well, easy.  If you are like I was just a few years ago, you might scoff at that idea (I know I would have).  I really don’t have an irrefutable argument that is guaranteed to convince you of this, but my best advice would be to try it yourself if you are not convinced.

Steve Yegge’s Kingdom of Nouns essay painted a good (and at times humorous) picture of why the functional abstraction is advantageous over the pure object oriented abstraction.  But even after reading that essay, part of me still felt that an object model, while maybe not the most accurate way to model software, was still the most natural.  When I look at the world, for instance, I tend to see objects first, not actions.  I see a bird flying.  I see a traffic sign in front of me.  I see a car whose driver is honking at me.  Ok, maybe driving to work is not the best place to contemplate the advantages of different programming paradigms.  But the fact remains, when I model the real world, I start with the objects.  It’s only natural to feel a desire to do the same with software.

But then something occurred to me.  That may be how I internally model the real world around me, but it is certainly not the frame of mind I am in when I interact with it.  When I’m on the on-ramp to the highway, I think “Step on the gas pedal”, or at a higher abstraction level, “Speed up”.  If the car in front of me suddenly stops, I think “Step on the brake pedal”, or “Stop”.  At least if enough of my brain is still paying attention to the traffic and not thinking about computer programming, of course.  When I’m viewing the world I may think in objects first, but when I am interacting with it, the verbs get my attention.  Maybe that’s why OOP does such a good job at creating intricate UML diagrams, but fails so miserably when it comes to actually writing software that does something.

Now I brought this analogy up on a Java mailing list once, and almost immediately I had people responding saying I was modeling (that word again) it wrong.  I wasn’t thinking “Step on the gas pedal”; I was really thinking “Right foot, step on the gas pedal”.  Well, let me assure you, that’s not how I think.  Certainly you can model it that way; I never said there was not an object in play.  It just does not have my complete focus.  To be honest, I don’t really care what it is that steps on the gas pedal, I just want something to do it.  My right foot is just the part of my body that has become accustomed to being in charge of the gas and brake pedals due to the configuration of modern automobiles.  Now you can argue that some people, having been brought up on OOP and still being devout followers, may have a view of reality in which they really do think in terms of objects first, and thus when performing an action they mentally must first identify what it is that will perform that action.  That’s just not how I think, and I strongly suspect it is not how most people naturally think.
