Hi, I'm Martin and I write software. I also have a hell of a lot of stuff going through my head with thoughts and opinions on many things. Unfortunately, in this whole jumble I often fail to articulate my point of view very well. This blog is an attempt to rectify that by trying to put all my thoughts on various subjects down in one place. If you want to get in touch, email me at email@example.com.
I've just finished watching David Heinemeier Hansson's keynote talk at RubyConf. It's a great talk about what makes Ruby special in his eyes and why he hasn't bothered learning other languages since discovering Ruby. There are some things that I disagree with, or that contradict each other (at one point he says Ruby protects you from pointer arithmetic, but later that he likes Ruby because it doesn't stop you from doing things, even though they can be dangerous if used incorrectly), but on the whole it's well worth watching.
Except for one point, about a sentiment that, at least to me, is widely shared in the Ruby community (and to a degree in the scripting language community in general). To quote DHH from the talk: "The programming equivalent of having your balls fondled when you go to the airport is… type safety". Now this post isn't going to be about type safety as such, but about something that encompasses type safety, which as the title suggests is: being explicit.
So let's cover types. There are really two major axes of typing: static vs duck, and strong vs weak. Static typing is where you write type information in the code; duck typing is where the type is worked out at runtime. They are often seen as opposites, but they aren't necessarily mutually exclusive. Strong typing is where types are enforced by the compiler and/or runtime (e.g. you can't put an integer in a variable typed for a float) and is the opposite of weak typing.
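To make the duck typing half of that concrete, here's a minimal Ruby sketch (the `Duck`, `Robot`, and `greet` names are just made up for illustration): neither class declares a shared interface anywhere, yet both work, because all that matters is whether the object responds to the method at runtime.

```ruby
# Duck typing: no declared types, no shared superclass or interface.
class Duck
  def speak
    "quack"
  end
end

class Robot
  def speak
    "beep"
  end
end

# greet never says what kind of thing it accepts; the "type check"
# happens at runtime, when thing.speak is actually called.
def greet(thing)
  thing.speak
end

puts greet(Duck.new)   # quack
puts greet(Robot.new)  # beep
```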
Now, there are people who feel that writing any type information in code is wrong: you should just say "this is a variable" and use introspection to work out the type at runtime. That's a perfectly valid point of view. There is another group of people who feel that you should write down every type and enforce everything. That is another perfectly valid point of view. But I find both of those extreme.
My opinion on strong vs weak typing is pretty clear. I hate strong typing. I find it is like trying to protect yourself from bad things happening by wrapping yourself in layers of bubble wrap. Sure you're safe, but you're also restricted in your movement. I prefer weak typing, as it doesn't force anything on you. Static vs duck typing is different. I love duck typing; it gives you absolute freedom. But I also love static typing, as it gives you a lot of information to build tools. This is part of why I like using Objective-C. I can choose to use static typing for objects if I wish, but if it gets in the way I can just use the id type.
Now yes, writing out type information takes a bit longer, but I think DHH (along with many other Ruby devs) has been stung by Java's pedantry. Static typing gives a lot of information. It can make it far easier to write decent editors with smart autocompletion. It makes refactoring a LOT safer and more reliable. It allows for better static analysis and compiler warnings and errors. It basically makes it easier to have tools do stuff for you, rather than making you do it yourself.
You can try to work out types via type inference, but in a dynamic language that is guessing. I much prefer facts to guesses, and to be explicit rather than implicit.
Explicitness is good, doubly so in code. Explicit code lays out its intentions and doesn't force someone to figure out what the person who wrote it was assuming. There are many ways to be explicit: cautious coding, using static types, using obvious variable and method names, writing detailed comments for non-obvious code. Wherever possible you should be doing these things and reducing where you are being implicit.
Now it's hard to get around the need to be implicit or make assumptions sometimes, which is why languages that enforce everything (e.g. Java) require more code to do the same thing than a language that doesn't (e.g. Ruby). But just because static typing can get in the way on the odd occasion doesn't mean that it is bad. If you have a variable that is going to hold a string, and you know it is going to hold a string and shouldn't be holding anything else, then why not set its type as a string, so that if you accidentally set it to a different type, the compiler can warn you?
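Here's a small sketch of that failure mode in Ruby (the variable name is invented for illustration): without a declared type there's no warning when a variable is rebound to the wrong kind of value, so the mistake only surfaces later, at runtime, somewhere away from where it was actually made.

```ruby
name = "Martin"
name = 42          # no compiler to warn us; Ruby happily rebinds it

begin
  name.upcase      # the mistake only shows up here, at runtime
rescue NoMethodError => e
  puts "caught at runtime: #{e.class}"
end
```

A statically typed compiler would have flagged the assignment itself, at the point of the mistake, rather than letting it surface wherever the variable next gets used.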
As an everyday example of guesses vs facts, take buying a piece of furniture. You could guess "yeah, this table will fit in a space at home, it's roughly 4ft". But then you get home and find the 4ft gap you wanted to fit the table in is actually 3.75ft, and it doesn't fit. If you had been explicit and measured the size of the space, you would have a fact, which you can use to make a decision. Now things like that don't always come back to bite you, but they can, and the more explicit you are, the less you'll get bitten.
DHH used the phrase "enough rope to hang yourself" as an analogy for why enforcing things like typing can be bad. We don't stop people from buying long lengths of rope because one use case may be for someone to hang themselves; it's not the main use case for rope. Now I agree that you shouldn't forbid something just because you can do something bad with it, and you shouldn't enforce typing simply because you can mess things up. However, the reverse is also true. As much as Java enforces types, Ruby enforces the lack of types. There is no real way for me to be explicit, barring writing more code to check the type of every variable at runtime or writing unit tests.
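That "more code to check the type at runtime" option looks something like this sketch (the `Person` class and its accessor are made up for illustration): you end up hand-writing the check that a static type annotation would have expressed in one word.

```ruby
class Person
  attr_reader :name

  # A hand-rolled runtime type check, standing in for what a static
  # type declaration would give us for free at compile time.
  def name=(value)
    raise TypeError, "name must be a String" unless value.is_a?(String)
    @name = value
  end
end

person = Person.new
person.name = "Martin"   # fine
# person.name = 42       # would raise TypeError, but only at runtime
```

It works, but the check runs per-assignment at runtime and has to be written (and tested) for every attribute you care about.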
On one side of the argument is "you should always be explicit" and on the other is "you should always be implicit". In language terms these are Java and Ruby respectively. Personally, I think the best way to view things is "you should aim to be explicit, except where you really can't". In language terms this is Objective-C. You are persuaded to be explicit, because it makes it easier to catch mistakes, but it isn't forced upon you.
Explicitness should not be seen as restrictive, and implicitness should not be seen as dangerous. If you are implying something to the extent that it can only have one outcome, then it costs nothing to be explicit instead, and in programming that covers the vast majority of cases. For the remaining few cases where you can't be explicit, then be implicit, but try to make it the exception rather than the norm, otherwise you're increasing the odds of something bad happening.