By Michael Woloszynowicz

Saturday, November 27, 2010

3 Groups You Must Design Your Application For... or Else

I recently completed the task of redesigning and implementing a large component in our web-based application. In doing so I learned that you have to design your application for three groups of users and have an intimate knowledge of what creates value for each one. Before I go into detail about these three groups, I'd like to give you a little background on how I got to this point, without going into the details of the implementation.

My goal was to create a more flexible, portable, modern, and full-featured tool than the clunky third-party Java applet we had been using to date. After 6 months of back-end and front-end coding (see Dojo Lessons for some front-end programming tips I learned in the process) I produced what I and my co-workers saw as a tremendous advancement over what we had. The new tool had all the things I set out to achieve, but the end-user response was lukewarm at best. How could this be, I wondered; can't people see that this tool is vastly better than what they had before? The navigation was improved, the number of useful options was expanded, it was more reliable, and most importantly it allowed us to implement some of the functionality that these very customers had been asking us for but that we were unable to add due to the restrictions imposed by the existing tool. After some feedback on our beta version we discovered that most of the issues arose from one aspect of the tool which users found more cumbersome and time-consuming than what they had grown accustomed to. While 95% of the functionality was improved, to the end-user the remaining 5% outweighed all of it and rendered our new tool a dud. Our end users didn't care about the added functionality, the fact that it was AJAX based rather than Java based, or the fact that we could now implement their other desired features. The reason is that this small component consumed 95% of their use and time, so no matter how great everything else was, if it wasn't as easy as before we couldn't convince users to adopt it.

In the interest of keeping our customers happy I proposed a second alternative which would save time but still maintain portability. We were sure this new version would resolve any objections. Sure enough, it didn't; it was still a bit more time-consuming than the basic existing applet, so users still complained. Finally I accepted the fact that we had to add a third option using a new Java applet that would restore the previous functionality while still giving us all of the advantages of our new tool. We now have three options for completing the users' crucial task, each with varying levels of portability, but without sacrificing our initial goals.

Now on to the primary point of this post. When I set out to design this tool I had considered the existence of the three groups but underestimated the importance of each one. First, we wanted to design for the end-users' managers, as they are the ones we sell the software to. These managers wanted reliability and improved performance that would result in a better consumer-facing experience. We then designed for the end-user by trying to build an interface that was similar to tools they use on a daily basis and provide some much-needed functionality enhancements in an easy-to-use package. Finally, we designed for ourselves by keeping future expansion and value-enhancing features in mind. We knew from the start that one element of the new tool was slightly less efficient than it was before, but we also knew that this was the way most companies did it, and that the other improvements would surely convince people to overlook this. This thinking was correct in the context of new clients, who loved the new tool, but as we know, people are resistant to change, and existing end-users complained furiously. What we did right in this process was to roll out the new tool slowly, providing access to the old one all the while and actively soliciting feedback. Although the response we got was not what we wanted to hear, it prevented us from losing a good chunk of our clients and allowed us to improve the user experience.

The lesson, therefore, is not to underestimate the power of the end-user, and to always find out what the most crucial aspects of your application are. Remember that what is seen as a value driver by you, or by a purchasing manager or exec, is not always seen that way by an end-user. The biggest source of value for the end-user is having your software help them accomplish their task quickly and easily; if it fails to do so they will complain to their managers, complain to their co-workers, and go from advocates to saboteurs in an instant. One exception to this is enterprise software, where initiatives are often assigned in a top-down manner with end-users having little say, hence the dismal usability of many ERP systems. Those who target small to medium-sized businesses with a subscription-based service, however, must remember to identify the sources of value for every class of user and incorporate them into their designs. Design for the buyer, the user, and yourself, be proactive in soliciting continuous feedback, and remember that little changes can make a world of difference.

If you liked this post please follow me on Twitter for more.

Sunday, November 21, 2010

Web 2.0 Businesses, a Bubble, a Boom, or Just Crazy?

In the recent conversation between John Doerr and Fred Wilson at the Web 2.0 Summit, the question of whether we are on the verge of a new bubble in the internet company investment environment came up. The argument is that time-tested standards on valuations no longer apply and even small firms with little more than an idea are getting massive valuations. This conversation has prompted me to think about whether we are in a bubble or a boom, what the effects of the rising valuations are, and whether we will see a repeat of the dot-com bubble of a decade ago.

The TechCrunch page that featured a video of this discussion had an astute comment from a reader who noted that one of the key differences between today and a decade ago is that today we are dealing with private companies, whereas in 2000 we were dealing with public firms. Let's begin by carrying on this argument and discussing the differences between these two firm types:
  • Since the firms are private, any damage is contained to a smaller group of investors such as the VC firms and fund participants, angel investors, founders, and employees holding shares or options
  • Private companies do not disclose financials so we can't tell just how inflated the valuations really are
  • Valuations are determined through negotiations with a small group of players whereas public share prices are determined by a large pool of investors in an open market
  • The liquidity of private shares is much lower (although increasing for popular companies) than it is for public shares so we won't see any rapid trading
  • For a number of reasons, including high costs and signaling effects, public firms tend not to issue equity frequently, while private firms often undergo several series of funding, so valuation has a profound impact at every financing round
With the above differences it is clear that a bubble would develop in a somewhat different manner than it did a decade ago. The origins of the bubble lie within the VC firms, which are seeing the popularity of their funds rise as a result of low interest rates and an improving economy. After the significant reduction in VC investment in 2008 and 2009, they are looking to increase the size of their portfolios and searching for runaway hits in hot sectors like mobile, geolocation, and cloud computing. Over the last two years we've seen many announce the death of venture capital as IPO exits were more or less nonexistent. Although IPO activity is still fairly stagnant for a multitude of reasons, VCs are now driven by the prospect of private sale exits. With large players such as Google, Apple, and Microsoft looking to expand horizontally beyond their core businesses, and with an unprecedented amount of cash on hand, VCs are hoping that acquisitions will provide a way out. We're already seeing quite a few buyout offers, with a potential Google-Gowalla deal looking to be one of the bigger ones this year. Another worry is that the liquidity of private shares is increasing in the secondary markets, which further inflates valuations on a wider level.

In addition to rising competition among themselves, top-tier VC firms are also facing competition from a growing number of "super angels" that have become popular thanks to their smaller ownership requirements and lower appetite for go-big-or-go-home strategies. Due to their resource dependence, venture capital firms target high-potential businesses that can offer 10x and above returns, and as a result they tend to invest large sums of money to ramp up development and increase growth rates. This is not necessarily healthy, as firms that serve unknown markets should operate in a lean manner and focus on learning the market and what a winning model is, rather than questing for rapid growth on an unproven business model. This is precisely the approach that Y Combinator favors, as they prefer investing a small amount of capital in startups and providing advice and guidance to help firms home in on a viable model. The Y Combinator strategy is in line with the tenets of the lean startup, and makes sense now more than ever as startup costs have been driven down drastically thanks to a multitude of open source technologies and cloud computing services. The other problem with the increased flow of VC money is that it reduces the motivation for firms to turn cash-flow positive by monetizing their efforts quickly, and instead encourages them to focus on growing their user bases.

This leads to one of the parallels with the dot-com bubble: we're valuing companies that are in most cases not profitable. The valuations are therefore based on the expected payoff from a large and rapidly growing user base. Further increasing the risk is the fact that in many cases the markets these firms operate in are unproven and still developing. Foursquare is a perfect example of this, as it doesn't have a revenue model and operates in a market that is being flooded with competitors, all of whom are trying to find a sustainable business model. Given a lack of cash flows and huge risks to those cash flows, financial models are discarded and replaced with euphoria and optimism. That's not to say that all large investments in businesses with unproven business models are silly. In some cases they are justifiable. In fact, if we didn't have aggressive VCs looking for the next big thing, we wouldn't have many of the free services we love today. What I am suggesting is that we simply ask ourselves whether a company like Twitter really needs to have 300 employees at its current stage, and whether a small early-stage startup really needs a cash infusion of several million. Can we accomplish the same result with a little more restraint and a greater focus on fundamentals?

The problem is that there is no easy way to cool the market down. Even if VCs were to collude in an effort to drive down the prices of new deals, the negative signal this sends would also reduce the value of their existing holdings. Whether we're dealing with a bubble or a boom, the access to capital is great for startups and things will continue to operate well, provided the flow of capital continues. If, however, we find ourselves in the midst of a double-dip recession, those firms that have learned to run lean and generate positive cash flows will emerge unscathed, while the exuberant bunch may find themselves starved for capital. On a positive note, the public markets are far less frothy. Public tech companies are trading at far more reasonable levels than we saw at the peak of the market near the end of 2007. Google and Yahoo, for example, were trading at trailing P/Es of 53 and 49 respectively at the end of 2007, and now trade at more reasonable levels of 24 and 22. Regardless of the outcome, the greater population can take solace in the fact that any fallout will be contained to a small group of players, and thanks to the growing tide of super angels, we'll be left with lots of tech startups that chose control over optimism.

If you liked this post please follow me on Twitter for more.

Saturday, November 13, 2010

The Dojo Toolkit - Lessons from the Trenches

Having spent the last few months working heavily with the Dojo toolkit to develop a large RIA document management tool, I thought I'd share some of the lessons I learned along the way. For those who haven't heard of Dojo, it is a JavaScript-based RIA toolkit similar in nature to jQuery and OpenLaszlo.

To get started, let's discuss why I chose to use Dojo instead of the more popular jQuery toolkit. The primary motivators for me were as follows:
  • Dojo's syntax is very easy to read, write, and understand
  • Dojo has a fantastic collection of widgets that cover 90% of your needs
  • Dojo's class system turns JavaScript into a real programming language (more on this later)
  • Dojo does pretty much everything that jQuery does
  • On-demand package loading means you can load what you need, when you need it
  • Comes with built-in developer tools
To keep this post objective, here are a few downsides to Dojo that I've come across:
  • It's not as popular as jQuery, so it's harder to find new hires that already know it
  • Dojo is not as mature as jQuery, so documentation and code samples are not as easy to come by
  • Dojo is significantly larger than jQuery, so it's easy to learn but hard to master
  • Dojo can become slow when writing a large application with lots of widgets or consuming a great deal of data (more on this later)
If you've made it to this point you're either already a Dojo coder or you're intrigued by its possibilities. If you are considering trying Dojo, my personal opinion is that you should. Having used both jQuery and Dojo, I find that Dojo wins hands down as the better toolkit, a sentiment shared by many of my co-workers. This post is not intended to be a rigorous tutorial on Dojo - there are many great books for that, such as Dojo: The Definitive Guide - what I intend to do is give you some high-level tips that books typically omit.

Tip 1: Use the Dojo class system
Dojo's class system is fantastic. It allows you to write proper object-oriented code that you can nicely organize in a package structure. No matter how small a component you may be writing, you should always encapsulate it in a Dojo class so that you can later extend it and load it on demand using Dojo's package loader. For those of you who are back-end programmers, I don't think I have to preach about the benefits of OO programming. Dojo provides it, so just use it; it will only take you 30 seconds longer to wrap your code in a class. Dojo does allow for multiple inheritance and I encourage you to use it, but for the sake of your co-workers, document uses of methods that are defined in parent classes. Unlike in back-end languages, it's difficult to trace the source of methods that originate further up a complex inheritance chain.
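To make that concrete, here is a minimal sketch of a class in Dojo's 1.x class system; the myApp.widgets.DocumentList name and package layout are made up for illustration:

    // myApp/widgets/DocumentList.js -- hypothetical package layout
    dojo.provide("myApp.widgets.DocumentList");

    dojo.declare("myApp.widgets.DocumentList", null, {
        // the constructor mixes in any arguments supplied by the caller
        constructor: function(args){
            this.documents = [];
            dojo.mixin(this, args);
        },

        addDocument: function(doc){
            this.documents.push(doc);
        }
    });

    // Elsewhere in the application, the class is loaded on demand and used:
    // dojo.require("myApp.widgets.DocumentList");
    // var list = new myApp.widgets.DocumentList({title: "Contracts"});

Because the class lives in a package that mirrors its name, dojo.require can fetch it only when it's actually needed.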

Tip 2: Internationalize from the start
Dojo provides a great set of tools for internationalization (i18n). Even if you have no near-term plan to support multiple languages, it's wise to house all of your copy in one or more master documents should the need for i18n arise. Although i18n is usually tedious, the tools provided by Dojo make it as easy as it gets. Doing this will only add an extra hour or two for each month of work.
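As a rough illustration of the legacy Dojo 1.x pattern (the myApp package, bundle name, and keys are hypothetical, and the exact file layout depends on your Dojo version):

    // myApp/nls/strings.js -- the master copy bundle; translations such as
    // myApp/nls/fr/strings.js override these values per locale
    ({
        saveLabel: "Save",
        deleteConfirm: "Are you sure you want to delete this document?"
    })

    // In application code:
    dojo.require("dojo.i18n");
    dojo.requireLocalization("myApp", "strings");
    var msgs = dojo.i18n.getLocalization("myApp", "strings");
    console.log(msgs.saveLabel); // "Save", or the current locale's translation

Keeping every user-facing string behind a bundle like this is the "hour or two per month" cost mentioned above.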

Tip 3: REST
Dojo's AJAX package and widgets are heavily geared towards RESTful web services, which is both good and bad. The downside is that if you are trying to reuse non-REST back-end code, you may have to modify the behavior of some of the widget data stores (this is neither fun nor easy). The upside is that the support provided for RESTful web services is outstanding right out of the box. If you're starting out with a new application, I highly recommend you write it as a RESTful API and build your front-end on top of that API (as Twitter does) using Dojo. This will not only make your front-end code clean, but you'll simultaneously be developing a platform, not just an application.
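As a sketch of what that front-end code can look like against a hypothetical /api/documents resource (the URL and fields are made up), Dojo's plain XHR helpers map directly onto REST verbs:

    // Fetch a single document from the RESTful back end
    dojo.xhrGet({
        url: "/api/documents/42",
        handleAs: "json",
        load: function(doc){
            console.log("Loaded:", doc.title);
        },
        error: function(err){
            console.error("GET failed", err);
        }
    });

    // Update the same resource with a PUT
    dojo.xhrPut({
        url: "/api/documents/42",
        putData: dojo.toJson({title: "Q3 Contract"}),
        headers: {"Content-Type": "application/json"},
        handleAs: "json",
        load: function(updated){
            console.log("Document saved");
        }
    });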

Tip 4: Events, events, and more events
Dojo's architecture is heavily event-driven, both with standard events and pub/sub-style syndication. While event handling used to be a tedious affair, it can now be handled with ease using built-in tools such as dojo.connect [connect to an event] and dojo.hitch [bind a method call to a specific context]. I therefore highly recommend designing your objects and applications around an event-based architecture. If you're tying multiple classes together, rather than passing object or method pointers across these classes, simply trigger events and have sibling or consuming classes connect to them. This allows your components to maintain loose coupling, resulting in a more robust application. Carefully decide when to use standard events vs. pub/sub, as there are noticeable differences in performance between the two. If you really do need to broadcast an event to a wide audience where establishing individual connections would be too onerous, go for pub/sub; otherwise stick with standard events.
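A small sketch of the two styles, with hypothetical widget, topic, and method names:

    // Standard connection: react to a button's onClick in the context of "this"
    var handle = dojo.connect(dijit.byId("saveButton"), "onClick",
                              dojo.hitch(this, "saveDocument"));
    // ...and clean up when the owning component is destroyed:
    // dojo.disconnect(handle);

    // Pub/sub: broadcast to any interested party without knowing who listens
    dojo.publish("/myApp/documentSaved", [{id: 42}]);

    // A sibling class subscribes to the topic instead of holding a reference
    // to the publisher, keeping the two loosely coupled
    var sub = dojo.subscribe("/myApp/documentSaved", this, "onDocumentSaved");
    // dojo.unsubscribe(sub);

This assumes a dijit button with id "saveButton" exists on the page and that the surrounding class defines saveDocument and onDocumentSaved methods.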

Tip 5: Go to the source
As I've mentioned, the documentation for Dojo is good but not great. The Dojo team does its best to maintain an up-to-date API and Reference Guide, and DojoCampus provides lots of examples, but oftentimes you just can't find what you're looking for. I've found that the best reference guide for Dojo is the source code itself. It is well documented but can be tricky to read. As described above, Dojo is heavily event-based, so when going over the source it's important to understand what events are being thrown and which objects are attached to those events. Debugging Dojo is not easy at the outset, but once you learn its structure, it's quite consistent across its various components (with a few exceptions). Spend a day or two going through the source, as this will pay off greatly in the long run. As an added bonus, Dojo was written by lots of great programmers; reading their code just might make you a better programmer. Another great tool for discovering the capabilities of an object is dojo.debug(). Simply pass your Dojo object into the debug function and inspect it using Firebug's DOM inspector. This will give you lots of details about the object's contents in a runtime environment.
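If dojo.debug() isn't available in the Dojo version you're running, Firebug's console offers a similar runtime view; a trivial sketch (the widget id is hypothetical):

    // Inspect a live widget instance to see its properties and methods
    var grid = dijit.byId("documentGrid");   // hypothetical widget id
    console.dir(grid);                       // expandable object view in Firebug
    console.log(grid.declaredClass);         // e.g. "dojox.grid.DataGrid"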

Tip 6: Don't settle
Dojo is designed to have its components extended to suit your specific needs. It will not cover 100% of your use cases, so extending functionality will quickly become a reality. Whatever you do, don't change the Dojo source; extending a class may take a bit longer, but it ensures that your application doesn't break down completely when you migrate to a new version. When extending a Dojo class, try to use this.inherited(arguments) [call the parent class's method with all the arguments] whenever possible so as to minimize the amount of code you have to copy and paste from the Dojo method you are overriding. As mentioned, Dojo can become slow if your application becomes large and you are using a number of widgets. To overcome this you will have to extend certain Dojo widgets and create optimized versions of methods so that they don't fire any unneeded events, especially if you are performing bulk operations. If you are not using any widgets on your page, then you are unlikely to encounter these performance issues.
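A minimal sketch of that pattern, extending a stock dijit instead of patching it (the class name and CSS class are hypothetical):

    dojo.provide("myApp.form.QuietTextBox");
    dojo.require("dijit.form.TextBox");

    // Extend the stock widget rather than editing its source, so upgrading
    // Dojo later doesn't clobber the customization.
    dojo.declare("myApp.form.QuietTextBox", dijit.form.TextBox, {
        postCreate: function(){
            // let dijit.form.TextBox finish its own setup first
            this.inherited(arguments);
            // then layer the application-specific tweak on top
            dojo.addClass(this.domNode, "myAppQuiet");
        }
    });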

Tip 7: Don't rush it
As programmers we have a tendency to read the first 20 pages of a technical book and then hit the ground running, learning as we go. Dojo is no different from any other language: you have to use it to become good at it, but I would highly recommend at least skimming an entire book on Dojo before taking on a larger project. I made the mistake of getting halfway through the book and finishing the rest two months down the road, only to realize that Dojo had already provided solutions to problems I was running into.

Tip 8: Think like a real programmer
For the longest time, JavaScript was seen as a lousy language and if you were a JS programmer you weren't a real programmer. Dojo changes all this by bringing in a wide array of tools that back-end programmers have utilized for years. The chaos is gone so take the time to design your class and event structures and take pride in your code, just as you would with any other language.

If you have any other Dojo tips that you've discovered while using it, please share them in the comments.

If you liked this post please follow me on Twitter for more.

Sunday, November 7, 2010

Startups, Where is your Story?

While attending the Web 2.0 Expo in New York near the end of September, I had the pleasure of visiting the startup showcase, an exhibition of various startups that were then judged by Tim O'Reilly and Fred Wilson. Overall there were some interesting ideas; my gripe, however, was with the delivery of those ideas. The three companies that were invited to present in front of a wide audience had ideas with mass-market potential and an ideal opportunity to promote their business to a room full of bloggers, media, and tech enthusiasts. Given such an opportunity I expected well-refined and compelling presentations, but instead they felt relatively flat and uninspiring.

Although the delivery in two of the three cases could have used a little improvement, it was the content that was lacking. Rather than giving a dry and clinical rundown of your product, tell me a story; let my imagination wander and relate to your product on a more personal level. Put me in the shoes of a person faced with the problem you are trying to solve; make their problem my problem. Now that I see the situation your target consumer is in, pitch your solution for that specific context. With a rich and compelling story you've told me two key components of any good startup pitch, the problem and the solution, and you've done so in a way that makes me believe the problem exists. With this in the bag you can proceed to two other components: traction and market size. Whose problems have you solved so far, how many customers do you have, and how many others have this problem? This is a great opportunity for a testimonial or an interesting example of your product being used in the real world (ideally the story you just told was a true story and you can tie it back).

Let's go through a concrete example using one of the companies at the startup showcase. I will use hour.ly as their concept is straightforward and easy to understand. The premise behind the startup is a job board for part-time and temp employees. The actual details of my example are fabricated but what I intend to do is offer a quick presentation on the company's concept.

"Good evening, my name is so and so and this is my partner so and so, [in an actual vc pitch you would give some background on yourselves here], and we'd like to talk to you about a product that we're very excited about called hour.ly. Let's start off by talking about Jane (show a picture of Jane). Until two years ago Jane was making a good living as an administrative assistant at a large marketing firm. Shortly thereafter she found herself in the same place as millions of other Americans, struggling to find work in an economy plagued by high unemployment. On the other hand we have John (show a picture of John), a business executive at a mid-sized software company. John is seeing demand picking up slightly but sales of his product are still variable and fear of a double-dip recession are preventing him from hiring anymore full-time staff. He's tried to meet the sporadic demand for administrative personnel through posts on Craigslist but has found it too time consuming and difficult to gauge the quality of candidates. He's also tried staffing and temp placement companies but hates spending the high fees attached to these services. Hour.ly aims to bridge this gap by providing an online marketplace for highly qualified people just like Jane. We ensure quality by providing a sophisticated recommendation and matching system that allows John to easily find reputable candidates with the skills he's looking for at fees well bellow those of traditional staffing companies. Today Jane works 20-30 hours a week providing part-time administrative help to John and other companies like his thanks to hour.ly. We currently have 3000 candidates and 500 employers using hour.ly and have personally facilitated the placement of over 4000 part-time and temp positions. Given the permanent change in the business landscape, flexible staffing will become an ever more popular alternative and hour.ly is well positioned to service this growing market."

With this example we've covered the problem, the solution, the market size, and the traction with a compelling story. The audience is placed in the shoes of the people experiencing the problem and sees firsthand how the product solves it. Startup presentations needn't be dry feature demonstrations. Remember that your product serves real people, people with a story and real emotions.

If you liked this post please follow me on Twitter for more.

Thursday, November 4, 2010

A Startup's Guide to Application Scaling

A common concern among startup founders and application architects is "when do I start worrying about scalability?". Is it something that you need to start worrying about right away, or can you put it off for a bit? It's surely a question that every VC will ask you during a pitch presentation, so you'd better have a good answer, right? The fact is that writing a highly scalable application takes considerably longer than writing one that runs off a single application server; otherwise there would be no debate, and we would just make everything scalable from the outset. The greatest problem is that as a startup you don't know how successful your product will be, so why spend your precious time and money on scalability when you could be releasing a product quickly and testing for market acceptance? After all, scalability only has value if someone is actually using your product. The lean startup methodology suggests that we build a minimum viable product (MVP) to test for problem-solution and product-market fit, and since we include the term "minimum" it is certainly implied that we not dedicate too much time to scalability. As I've noted in my previous post, the MVP doesn't even have to be a functioning product, so let's look specifically at a working prototype. Once again it would be presumptuous to assume a need for scalability at the prototype stage, but is there a middle ground that we can take? What if your MVPs validate your hypotheses and confirm a strong market need for your offering: should you spend longer developing a scalable initial product or should you get it to market ASAP? I will return to these questions in a moment, but first a brief digression.

The first thing to consider is the type of application you are building, or more specifically the revenue model for your application. Will it be a viral application where revenue is generated by ad sales from growth in your user base, or do you follow a traditional paid subscription model? Regardless of the model, your application will still need to scale at some point, but if it is of the viral sort then you may have a lot less time to react. As we know, viral applications follow a hockey-stick curve, and once the user base starts growing it will do so at a rapid rate. When this happens you had better be able to scale quickly, or at least have a very endearing fail whale to placate your users. On the flip side, since viral applications are typically free, users are a little more tolerant of downtime (unless you provide a critical service like Gmail), while paid subscription users expect a good degree of stability for their monthly fee.

So how do you handle scalability when you don't know if you'll ever need it? The answer lies in the fundamental principles of application programming. You needn't build a distributed system from the outset, but what you should do is make it as easy as possible to implement scalability when the time comes. To achieve this, your programmers should adhere to the following:
  • A Model-View-Controller (MVC) architecture
  • A well-defined service layer (this is one of the most important elements)
  • Object oriented design patterns such as Facades, Adapters, Factories, Abstraction, and Proxies
  • Loose coupling of application components (e.g. a Mediator design pattern)
Although these principles should be followed across your entire application, they are particularly crucial for services that are resource-intensive or that access a shared resource. Resource-intensive operations become excellent candidates for distributed computing, as they can be offloaded to a separate web or cloud server quite easily to regain precious computing resources needed to host the remainder of your application. Shared resources, on the other hand, pose a problem once you decide to scale horizontally and load-balance your web servers, as the resource may have to be duplicated on each server if it is not decoupled. The best example of this is user file storage in a web application. If your application stores files on the same server that hosts your application, you would have to replicate these files in real time if you wanted to run your application off two or more servers. By following the above rules you ensure that you can easily offload file storage to another server or a cloud service such as Amazon S3. At that point, all that is needed is for your factory method to return a new IO class that communicates with a web service rather than the native file system.
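As a rough JavaScript sketch of that idea (all names are hypothetical and the storage calls are stubbed rather than real S3 or file-system APIs), the rest of the application only ever asks a factory for a file store and never cares which implementation it receives:

    // Two interchangeable stores behind the same interface (stubbed for brevity)
    var LocalFileStore = {
        save: function(path, data){
            // would write to the local disk of the application server
            console.log("local write:", path);
        }
    };

    var S3FileStore = {
        save: function(path, data){
            // would issue an HTTP PUT to a storage bucket instead of local disk
            console.log("PUT to bucket:", path);
        }
    };

    // The factory is the only place that knows which implementation is in use,
    // so moving to S3 when you add a second web server is a one-line change.
    function createFileStore(config){
        return config.useS3 ? S3FileStore : LocalFileStore;
    }

    // Application code stays the same either way:
    var store = createFileStore({useS3: false});
    store.save("/uploads/report.pdf", "...file contents...");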

Once you have a strong application architecture, you have to make metrics your religion. Monitor as much as you can and plot these metrics against your user growth. Estimate how many users your current architecture can support and combine that with your growth rate to see how much time you have. As you gain more traction, your proposition to investors will become that much more appealing. If they ask how your application scales, you can at least tell them that it's been designed with scalability in mind and that the funding they provide will in part go towards handling your rapidly expanding user base. Your short-term solution is always a faster server, a separate database server (if you don't already have one), or clustered DB servers, which are easy to set up and generally don't require many application code changes (provided you have a good data access layer).

Regardless of the steps you take towards scalability, an important thing to keep in mind is that application scaling cannot happen in a silo. It should be a holistic process that involves cross-functional teams throughout your organization. You may be able to scale your application, but if your support or IT staff cannot, then you still have a point of failure. If marketing launches a major sales initiative and your application isn't ready to handle the impending inflow of customers, then you've not only wasted your advertising dollars but also risk losing existing clients. The Flipboard launch was a classic example of technical and business units being completely out of sync. If you expect Ashton Kutcher to champion your product, you had better be able to scale from the outset. For those of us who aren't so lucky, a middle ground is usually enough. It's generally best to get your product into your customers' hands as quickly as possible, but don't leave scalability as a complete afterthought. Invest a bit of extra time and make provisions for it from the outset, so that when it is needed you can respond quickly without having to rewrite your entire application. If nothing else, you'll have a well-designed and robust application that you can maintain for years to come.

If you liked this post please follow me on Twitter for more.