January 25, 2010
How much is a good idea worth? According to Derek Sivers, not much:
It's so funny when I hear people being so protective of ideas. (People who want me to sign an NDA to tell me the simplest idea.) To me, ideas are worth nothing unless executed. They are just a multiplier. Execution is worth millions.
To make a business, you need to multiply the two. The most brilliant idea, with no execution, is worth $20. The most brilliant idea takes great execution to be worth $20,000,000. That's why I don't want to hear people's ideas. I'm not interested until I see their execution.
I was reminded of Mr. Sivers' article when this email made the rounds earlier this month:
I feel that this story is important to tell you because Kickstarter.com copied us. I tried for 4 years to get people to take Fundable seriously, traveling across the country, even giving a presentation to FBFund, Facebook's fund to stimulate development of new apps. It was a series of rejections for 4 years. I really felt that I presented myself professionally in every business situation and I dressed appropriately and practiced my presentations. That was not enough. The idiots wanted us to show them charts with massive profits and widespread public acceptance so that they didn't have to take any risks.
All it took was 5 super-connected people at Kickstarter (especially Andy Baio) to take a concept we worked hard to refine, tweak it with Amazon Payments, and then take credit. You could say that that's capitalism, but I still think you should acknowledge people that you take inspiration from. I do. I owe the concept of Fundable to many things, including living in cooperative student housing and studying Political Science at Michigan. Rational choice theory, tragedy of the commons, and collective action are a few political science concepts that are relevant to Fundable.
Yes, Fundable had some technical and customer service problems. That's because we had no money to revise it. I had plans to scrap the entire CMS and start from scratch with a new design. We were just so burned out that motivation was hard to come by. What was the point if we weren't making enough money to live on after 4 years?
The disconnect between idea and execution here is so vast it's hard to understand why the author himself can't see it.
I wouldn't call ideas worthless, per se, but it's clear that ideas alone are a hollow sort of currency. Success is rarely determined by the quality of your ideas. But it is frequently determined by the quality of your execution. So instead of worrying about whether the Next Big Idea you're all working on is sufficiently brilliant, worry about how well you're executing.
The criticism that all you need is "super-connected people" to be successful was also leveled at Stack Overflow. In an email to me last year, Andy Baio -- ironically, the very person being cited in the email -- said:
I very much enjoyed the Hacker News conversation about cloning the site in a weekend. My favorite comments were from the people that believe Stack Overflow is only successful because of the Cult of Atwood & Spolsky. Amazing.
I don't care how internet famous you are; nobody gets a pass on execution. Sure, you may have a few more eyeballs at the beginning, but if you don't build something useful, the world will eventually just shrug its collective shoulders and move along to more useful things.
One of my all time favorite software quotes is from Wil Shipley:
This is all your app is: a collection of tiny details.
In software development, execution is staying on top of all the tiny details that make up your app. If you're not constantly obsessing over every aspect of your application, relentlessly polishing and improving every little part of it -- no matter how trivial -- you're not executing. At least, not well.
And unless you work alone, which is a rarity these days, your ability to stay on top of the collection of tiny details that makes up your app will hinge entirely on whether or not you can build a great team. Great teams are the building blocks of any successful endeavor. This talk by Ed Catmull is almost exclusively focused on how Pixar learned, through trial and error, to build teams that can execute.
It's a fascinating talk, full of some great insights, and you should watch the whole thing. In it, Mr. Catmull amplifies Mr. Sivers' sentiment:
If you give a good idea to a mediocre group, they'll screw it up. If you give a mediocre idea to a good group, they'll fix it. Or they'll throw it away and come up with something else.
Execution isn't merely a multiplier. It's far more powerful. How your team executes has the power to transform your idea from gold into lead, or from lead into gold. That's why, when building Stack Overflow, I was so fortunate to not only work with Joel Spolsky, but also to cherry-pick two of the best developers I had ever worked with in my previous jobs and drag them along with me. Kicking and screaming if necessary.
If I had to point to the one thing that made our project successful, it was not the idea behind it, our internet fame, the tools we chose, or the funding we had (precious little, for the record).
It was our team.
The value of my advice is debatable. But you would do well to heed the advice of Mr. Sivers and Mr. Catmull. If you want to be successful, stop worrying about the great ideas, and concentrate on cultivating great teams.
January 18, 2010
Have you ever opened a simple little ASCII text file to see it inexplicably displayed as onegiantunbrokenline?
Opening the file in a different, smarter text editor results in the file displayed properly in multiple paragraphs.
The answer to this puzzle lies in our old friends, invisible characters -- the ones we can't see, and that are totally not out to get us. Well, except when they are.
The invisible problem characters in this case are newlines.
Did you ever wonder what was at the end of your lines? As a programmer, I knew there were end of line characters, but I honestly never thought much about them. They just … worked. But newlines aren't a universally accepted standard; they are different depending on who you ask, and what platform they happen to be computing on:
| Platform | Newline |
|---|---|
| Unix | LF |
| Mac (through OS 9) | CR |
| DOS / Windows | CR LF |
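To make those differences concrete, here's a minimal Python sketch -- the function name is my own invention -- that counts the raw bytes in a file to report which convention dominates:

```python
def detect_newline(data: bytes) -> str:
    """Report which line-ending convention dominates in raw file bytes."""
    crlf = data.count(b"\r\n")
    # Every CR LF pair also matches a bare CR and a bare LF,
    # so subtract the pairs to count the standalone characters.
    cr = data.count(b"\r") - crlf
    lf = data.count(b"\n") - crlf
    counts = {"Windows (CR LF)": crlf, "Unix (LF)": lf, "Mac (CR)": cr}
    return max(counts, key=counts.get)

print(detect_newline(b"one\r\ntwo\r\n"))  # Windows (CR LF)
print(detect_newline(b"one\ntwo\n"))      # Unix (LF)
print(detect_newline(b"one\rtwo\r"))      # Mac (CR)
```

Note that the bytes must be read in binary mode; opening the file in text mode would let the runtime silently translate the very characters you're trying to inspect.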
The Carriage Return (CR) and Line Feed (LF) terms derive from manual typewriters, and old printers based on typewriter-like mechanisms (typically referred to as "Daisywheel" printers).
On a typewriter, pressing Line Feed causes the carriage roller to push up one line -- without changing the position of the carriage itself -- while the Carriage Return lever slides the carriage back to the beginning of the line. In all honesty, I'm not quite old enough to have used electric typewriters, so I have a dim recollection, at best, of the entire process. The distinction between CR and LF does seem kind of pointless -- why would you want to move to the beginning of a line without also advancing to the next line? This is another analog artifact, as Wikipedia explains:
On printers, teletypes, and computer terminals that were not capable of displaying graphics, the carriage return was used without moving to the next line to allow characters to be placed on top of existing characters to produce character graphics, underlines, and crossed out text.
So far we've got:
- Confusing terms based on archaic hardware that is no longer in use, and is confounding to new users who have no point of reference for said terms;
- Completely arbitrary platform "standards" for what is exactly the same function.
Pretty much business as usual in computing. If you're curious, as I was, about the historical basis for these decisions, Wikipedia delivers all the newline trivia you could possibly want, and more:
CR+LF was in common use on many early computer systems that had adopted teletype machines, typically an ASR33, as a console device, because this sequence was required to position those printers at the start of a new line. On these systems, text was often routinely composed to be compatible with these printers, since the concept of device drivers hiding such hardware details from the application was not yet well developed; applications had to talk directly to the teletype machine and follow its conventions. The separation of the two functions concealed the fact that the print head could not return from the far right to the beginning of the next line in one-character time. That is why the sequence was always sent with the CR first. In fact, it was often necessary to send extra characters (extraneous CRs or NULs, which are ignored) to give the print head time to move to the left margin. Even after teletypes were replaced by computer terminals with higher baud rates, many operating systems still supported automatic sending of these fill characters, for compatibility with cheaper terminals that required multiple character times to scroll the display.
CP/M's use of CR+LF made sense for using computer terminals via serial lines. MS-DOS adopted CP/M's CR+LF, and this convention was inherited by Windows.
This exciting difference in how newlines work means you can expect to see one of three (or more, as we'll find out later) newline characters in those "simple" ASCII text files.
If you're fortunate, you'll pick a fairly intelligent editor that can detect and properly display the line endings of whatever text files you open. If you're less fortunate, you'll see onegiantunbrokenline, or a bunch of extra ^M characters in the file.
Even worse, it's possible to mix all three of these line endings in the same file. Innocently copy and paste a comment or code snippet from a file with a different set of line endings, then save it. Bam, you've got a file with multiple line endings. That you can't see. I've accidentally done it myself. (Note that this depends on your choice of text editor; some will auto-normalize line endings to match the current file's settings upon paste.)
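If you do end up with one of these Frankenstein files, normalizing it programmatically is straightforward. Here's a minimal Python sketch (the function name is my own) that collapses any mix of endings to a single convention:

```python
import re

def normalize_newlines(text: str, eol: str = "\n") -> str:
    """Collapse any mix of CR LF, bare CR, and bare LF to one convention.

    Order in the alternation matters: CR LF must be tried before bare CR
    or bare LF, or each pair would be replaced as two separate endings.
    """
    return re.sub(r"\r\n|\r|\n", eol, text)

mixed = "unix\nwindows\r\nmac\rend"
print(repr(normalize_newlines(mixed)))          # every ending becomes LF
print(repr(normalize_newlines(mixed, "\r\n")))  # or CR LF, if you prefer
```

This is essentially what Visual Studio's normalization dialog does for you -- it just asks first.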
This is complicated by the fact that some editors, even editors that should know better, like Visual Studio, have no mode that shows end of line markers. That's why, when attempting to open a file that has multiple line endings, Visual Studio will politely ask you if it can normalize the file to one set of line endings.
This Visual Studio dialog presents the following five (!) possible sets of line endings for the file:
- Windows (CR LF)
- Macintosh (CR)
- Unix (LF)
- Unicode Line Separator (LS)
- Unicode Paragraph Separator (PS)
The last two are new to me. I'm not sure under what circumstances you would want those Unicode newline markers.
Even if you rule out Unicode and stick to old-school ASCII, like most Facebook relationships … it's complicated. I find it fascinating that the mundane ASCII newline has so much ancient computing lore behind it, and that it still regularly bites us in unexpected places.
If you work with text files in any capacity -- and what programmer doesn't -- you should know that not all newlines are created equally. The Great Newline Schism is something you need to be aware of. Make sure your tools can show you not just those pesky invisible white space characters, but line endings as well.
January 10, 2010
The end result, to my mind, is a device that occupies an uncomfortable middle ground between laptops and smartphones, one that tries to please everyone and pleases no one. Consider the factors:
- Size: A bit too large to go into your pocket; a bit too small for regular day-to-day work.
- Power: Slightly more capable than a smartphone; slightly less capable than a laptop.
- Price: Slightly higher than a higher-end smartphone but lacking a phone's capability and portability; slightly lower than a lower-end notebook but lacking a notebook's speed and storage.
To summarize: Slightly bigger and pricier than a phone, but can't phone. Slightly smaller and cheaper than a laptop, but not that much smaller or cheaper. To adapt a phrase I used in an article I wrote yesterday, netbooks are like laptops, but lamer.
This is so wrongheaded I am not sure where to begin. I happen to agree with Dave Winer's definition of "netbook":
- Small size.
- Low price.
- Battery life of 4+ hours. Battery can be replaced by user.
- Built-in wifi, 3 USB ports, SD card reader.
- Runs my software.
- Runs any software I want; no platform vendor to decide what's appropriate.
- Competition. Users have choice and can switch vendors at any time.
Netbooks are the endpoint of four decades of computing -- the final, ubiquitous manifestation of "A PC on every desk and in every home". But netbooks are more than just PCs. If the internet is the ultimate force of democratization in the world, then netbooks are the instrument by which that democracy will be achieved.
No monthly fees and contracts.
Nobody telling you what you can and can't do with your hardware, or on their network.
To dismiss netbooks as like laptops, but lamer is to completely miss the importance of this pivotal moment in computing -- when pervasive internet and the mass production of inexpensive portable computers finally intersected. I'm talking about unlimited access to the complete sum of human knowledge, and free, unfettered communication with anyone on earth. For everyone.
It's true that smartphones are slowly becoming little PCs, but they will never be free PCs. They will forever be locked behind an imposing series of gatekeepers and toll roads and walled gardens. Anyone with a $199 netbook and access to the internet can make free Skype videophone calls to anywhere on Earth, for as long as they want. Meanwhile, sending a single text message on a smartphone costs 4 times as much as transmitting data to the Hubble space telescope.
I don't care how "smart" your smartphone is, it will never escape those corporate shackles. Smartphones are simply not free enough to deliver the type of democratic transformation that netbooks -- mobile PCs cheap enough and fast enough and good enough for everyone to afford -- absolutely will.
That's why I love netbooks. In all their cheap, crappy glory. And you should too. Because they're instruments of user power.
The truly significant thing is this -- the users took over.
Let me say that again: The users took over.
I always say this is the lesson of the tech industry, but the people in the tech industry never believe it. This is the loop: in the late 70s and early 80s, the minicomputer and mainframe guys said the same kinds of things about Apple IIs and IBM PCs that Michael Dell is saying about netbooks. It happens over and over again. I've recited the loops so many times that every reader of this column can recite them from memory. All that has to be said is that it happened again.
Once out, the genie never goes back in the bottle.
Netbooks aren't an alternative to notebook computers. They are the new computers.
Cheap and crappy? Maybe those early models were. But having purchased a new netbook for $439 shipped, I find it difficult to imagine the average user ever paying more than $500 for a laptop.
For the price, this is an astonishingly capable PC:
- Dual Core 1.2 GHz Intel CULV Celeron processor
- 2 GB RAM
- Windows 7 Home Premium
- 11.6" screen with 1366 x 768 resolution
- Thin (1") and light (3.5 lbs)
- Good battery life (5 hours)
- 3 USB ports, WiFi, webcam, gigabit ethernet
Windows 7 is a fine OS, but this machine would surely be cheaper without the Microsoft Tax, too.
The Acer Aspire 1410 isn't just an adequate netbook, it's a damn good computer. At these specifications, it is a huge step up from those early netbook models in every way. But don't take my word for it; read the reviews at netbooked and Liliputing. (Caveat emptor -- there are lots of 1410 models, and the newer dual core CPU version is the one you want.)
That's why the current Intel CULV CPUs are far more attractive options -- they're dramatically faster, and have become power-efficient marvels. I hooked up my watt meter to this Aspire 1410 and was surprised to find it consumed between 13 and 16 watts of power in typical use -- while my wife was browsing the web in Firefox, over a wireless connection, with multiple tabs open. I fired up the Prime95 torture test to force the CPU to 100% load, and measured 21 watts with one CPU core fully loaded, and 26 watts when both were. These are wall measurements, which reflect power conversion inefficiencies of at least 20%, so real consumption was between 10 and 20 watts. I was wondering why it ran so cool; now I know. It barely uses enough power to generate any heat!
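The arithmetic behind that estimate is simple: a wall reading includes whatever the AC adapter wastes as heat, so multiplying by an assumed ~80% conversion efficiency approximates the machine's actual draw. A quick sketch (the 80% figure is my rough assumption, not a measured spec):

```python
def actual_draw(wall_watts: float, efficiency: float = 0.8) -> float:
    """Estimate real power draw from a wall-meter reading,
    assuming ~20% is lost in the AC adapter's conversion."""
    return wall_watts * efficiency

# The four wall readings from the Aspire 1410 above.
for wall in (13, 16, 21, 26):
    print(f"{wall} W at the wall ≈ {actual_draw(wall):.1f} W actual")
```

Run that and the range works out to roughly 10 watts idle through 21 watts at full dual-core load -- hence "between 10 and 20 watts."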
Modern netbooks are not cheap and crappy. They're remarkable computers in their own right, and they're getting better every day. Which makes me wonder:
A recurring question among Apple watchers for decades has been, "When is Apple going to introduce a low-cost computer?"
Steve Jobs answered that decades-old complaint by stating, "We don't know how to build a sub-$500 computer that is not a piece of junk."
They may be pieces of junk to Mr. Jobs, but to me, these modest little boxes are marvels -- inspiring evidence of the inexorable march of powerful, open computing technology to everyman and everywhere.
We have produced a democracy of netbooks. And the geek in me can't wait to see what happens next.