Venture Capital and Start-up Companies

Technology, Politics and Capital Markets

Computer science professor and science fiction writer Vernor Vinge has written about what he refers to as a technological singularity, a point at which technological change accelerates so rapidly that its direction becomes impossible to predict. This is based on Vinge's observation that technology accelerates the development of technology. The design of high performance processor chips with millions of logic gates would not be possible without networks of cheap, high performance computer systems to run the design, simulation and layout software. These high performance processors in turn allow still more powerful computer systems to be designed.

What engineers and computer scientists sometimes forget is that technological evolution does not spring from information and ideas alone. Technological progress is built on a foundation of political and financial systems. These have taken hundreds of years to evolve. Social and financial systems reflect accepted thought by a broad group of people. The evolution and understanding of social and financial ideas is much slower than thinking in science and engineering.

For example, high quality scientific research depends on the free flow of information. The rise of totalitarian states has always stifled both political and scientific ideas. Even in the United States a long, and successful, fight has been waged to make cryptography research and engineering publicly available. This has enabled a variety of technologies, including secure financial transactions across computer networks.

Technology consumes capital as well as ideas and information. Liquid, transparent capital markets are critical for the development of technology. Every piece of technology that we use has moved from concept to reality because of capital expenditure. Except for technology developed by large companies like IBM, Sony and Nokia every piece of computer and networking related technology that we use exists because of the strong capital markets in the United States. These markets have provided the venture capital and, later, equity market funding, for technology companies like Intel, Sun Microsystems, and the chip design software companies Synopsys and Cadence.

The critical part capital plays in the development of technology also reflects the balance of power in the computer industry. Most start-up companies are controlled by the board members who represent venture capital. They can, and do, fire the start-up founders.

The idea that capital holds most of the power is not something that engineers like much. In some cases the venture capital board members are MBAs who have not accomplished as much as the company founders. The start-up founders supplied the insight and the hard work to found the company, yet it is controlled by shallow capitalists. Engineers tend to discount the power and importance of capital.

Venture Capital

Commercial venture capital, in which a wealthy individual provides capital for an enterprise, has probably existed since the Middle Ages (the 1400s) in industries like the British manufacture of wool cloth.

The first high risk, high reward commercial enterprises were maritime trade and piracy. In the Renaissance era maritime trade in spices, tea and coffee produced huge profits. But there were huge risks as well. Ships could be lost to weather, navigation error or piracy. Groups of wealthy merchants and investors formed syndicates that would lease a ship (if they did not already own one), hire the captain and finance the cargo. Fortunes were made or lost as a result of a single trading voyage.

Huge profits and risks were associated with piracy as well, especially when gold started flowing from the New World to Spain and Portugal. The political and religious conflict between Spain and England provided legitimacy for English pirates. The huge profits realized from successful raids attracted venture capital, which purchased and provisioned ships in return for a share of any Spanish gold captured. Venture-backed pirates included Sir Francis Drake, who also had the political backing of Elizabeth I.

Early venture capital also laid the foundation for equity markets. Ventures like the British East India Company raised capital through "subscription":

In recognition of the national importance that attached to its activities and of their long-term, high-risk nature which must involve considerable overheads -- shipping, factories -- it was accepted that the Company, and the Company alone, must itself conduct all business. From this it followed that raising capital must also be on a corporate basis. And thus, as the directors put it, "the trade of the Indias being so far remote from hence [it] cannot be traded but as a joint and united stock." Theoretically this opened the Company's membership to any who were willing to subscribe and indeed, initially, subscription was the commonest avenue of induction into the Company.

The Honorable Company: A History of the English East India Company, John Keay, Macmillan, 1991

Wealth and energy have always been directly related. Humans and animals powered the world until the end of the eighteenth century, when the steam engine produced the first industrial revolution in human history. The explosion of technology produced vast wealth. Deep mining of gold, coal and diamonds was made possible by steam driven air and water pumps. Blast furnaces driven by steam engines allowed steel production on a level that was never before possible. Industries fed by the power of the steam engine had an appetite for capital like the steam engine had for coal. Capital markets evolved along with the new technology.

The steam engine also introduced the era of the inventor, when a technological innovation could produce wealth. Inventors had dreams and, they hoped, a profitable innovation. The problem was funding. In the nineteenth century and the first half of the twentieth century venture capital came from three sources:

  1. Wealthy individuals
  2. Banks
  3. Existing enterprises

These three sources of funding are all conservative and tend to fund proven ventures (e.g., mining and manufacturing). For example, in the 1930s and 1940s none of these sources would provide funding for Chester Carlson, the inventor of xerography. His initial funding was provided by the Battelle Memorial Institute, a research institute originally established to fund innovation in metallurgy.

Modern venture funds did not start to appear until the 1960s. One of the first was Davis and Rock, founded by Arthur Rock, who was a lead investor in Intel. Ironically, Rock was also instrumental in arranging funding for Fairchild Semiconductor, which spun off Intel, National Semiconductor and Advanced Micro Devices.

The huge returns from successful venture capital funding encouraged the formation of new venture funding groups. The wealth created in the computer industry was also plowed back into new companies. Eugene Kleiner, one of the founders of Fairchild Semiconductor, later became a founding partner of Kleiner Perkins, one of the pioneering venture firms in Silicon Valley. In a bit of historical irony, Robert Noyce, one of the founders of Intel, provided funding for Intel's arch competitor, AMD.

Venture capital investment has allowed people who became wealthy in the computer industry to not only increase their wealth but to remain a part of the fast moving industry. Paul Allen, who founded Microsoft along with Bill Gates, invests in start-up companies through his Vulcan Ventures. Ann Winblad, who made a small fortune through the sale of her software company, founded Hummer Winblad, a venture capital firm that invests in software companies.

Venture capital investing is no longer limited to Silicon Valley insiders. Venrock Associates, founded by the Rockefeller family, has invested in a number of technology companies, including Intel and Apple. Many pension funds and university endowments invest a portion of their holdings in venture capital. For example, Harvard University invested $1 million in MasPar Computer Corp.

Constant Revolution

Technological innovation produces constant revolution. Large pools of venture capital mean that ideas can be transformed into products more rapidly. But it also means that existing companies that are not fast moving will be left behind and ultimately destroyed. For engineers this means a chance of becoming wealthy through stock options, or of becoming unemployed when your employer goes extinct.

Large technology companies like IBM and AT&T at one time offered their employees stability and lifetime employment. At the start of the 1980s both of these companies appeared unassailable. IBM set the standards that the rest of the computer industry followed. The computer industry was referred to as IBM and the BUNCH, where BUNCH stood for Burroughs, Univac, NCR, CDC and Honeywell. By the end of the decade this had changed. IBM shed tens of thousands of employees through layoffs or forced early retirement. The company that set industry standards was now viewed as a "mainframe dinosaur". The companies that formed the "BUNCH" were either gone or shadows of what they had been.

One of the companies that rose while IBM declined was Digital Equipment Corp. (DEC), which at its zenith was the second largest computer company in the United States. In the early 1980s the DEC VAX computer system provided computing power at a fraction of the cost of IBM systems. DEC started out selling PDP-11 and later VAX systems to scientists and engineers, but with the VAX they made big inroads into commercial computing. Where IBM was viewed as a dinosaur, DEC was seen as a cutting edge computer company and one of the most desirable places for a young engineer to work.

In 1989 Eugene Brooks, who was then working at Lawrence Livermore National Laboratory, wrote a paper titled Attack of the Killer Micros, which described the impact of the exponentially increasing power of the microprocessor on the computer industry. This impact had already been felt by DEC. The VAX and later microVAX computer systems could not compete with UNIX systems from Sun Microsystems or with personal computers. DEC's sales declined and never recovered. DEC was eventually purchased by Compaq, a company that built Intel based personal computers.

The revolutionary effect of technology is accelerating. Silicon Graphics Inc. (SGI) was founded by James Clark in 1982. At the time Clark was a Stanford University professor. Like the DEC VAX, the Silicon Graphics workstation was coveted by engineers and scientists. SGI systems were used for visualizing large amounts of data in physics and for graphics rendering in movies like Jurassic Park. In 1996, at the apex of SGI's success, SGI founded their Silicon Studios subsidiary. Three years later SGI was a struggling company whose profits had turned to losses. The increasing performance of the Intel processors used in personal computers, along with high performance graphics chips from companies like S3 and, later, 3Dfx and Nvidia, put SGI into a decline from which they may never recover.

Eras of Silicon Valley

Silicon Valley got its name from its semiconductor companies. The first of these was Shockley Semiconductor, founded by William Shockley. Gordon Moore and Robert Noyce quit Shockley to found Fairchild and, later, Intel. In terms of numbers of companies, semiconductor companies are now in the minority in Silicon Valley, but the Valley still has some of the best VLSI designers in the world. Hewlett-Packard, though founded earlier, flourished in this era as well, its products made possible by the early semiconductor chips.

The next wave of companies were software companies, like Oracle, and companies whose products were made possible by the microprocessor, like Apple and Sun. This was also the era of the mini-computer, which was faster than microprocessor based systems. Mini-computers were built from medium scale integrated circuits (MSI) on large printed circuit boards.

The most recent wave of companies have been networking and Internet start-ups. Some companies, like Cisco and 3Com, were founded in the microprocessor era and, in the case of Cisco, grew explosively during the Internet boom.

There is another era: the post Internet boom era, which is unfolding now.

The Era of Mini-supercomputers

You see a lot of people who eventually will say, "I want to make a lot of money." Well, that isn't a good way to start a company. A better way to start a company is with the idea that you're going to make some products that are beneficial and you really want to build an organization. Build something, not just make a lot of money. You're going to hire a lot of people, advance the economy, and make life easier for people.

Arthur Rock, quoted in Champions of Silicon Valley; Visionary Thinking from Today's Technology Pioneers, Charles G. Sigismund, New York, John Wiley & Sons, Inc. 2000

Until the tulip times of the Internet boom, the objective of start-up company founders and the venture capitalists that funded them was to build a company that would last and grow. A company had to have a history of consecutive profitable quarters and the promise of continued profit growth before it could be taken public. Although computer industry growth was impressive by the standards of the steel industry in the late twentieth century, it took years for companies like Oracle, Sun and Hewlett-Packard to reach billion dollar stock market valuations.

The founders of the mini-supercomputer companies in the late 1980s were cast from the mold of company builders. Many of them came from companies like DEC, Data General, IBM and HP, and their vision was to build a computer company that would last and steadily increase in market value.

The idea behind mini-supercomputers was to provide cheap access to high performance numeric computing. At the time the supercomputer market was growing. Companies like Ford and General Motors were buying supercomputers for crash simulation and other numeric computation. Cray Research was doing well and it appeared that this segment of the market would keep growing.

The mini-supercomputer companies Alliant and Convex were founded in 1982 and started shipping systems in 1985. Both the Alliant and Convex systems were vector register based. Convex was founded by, among others, Steve Wallach, one of the key engineers on the Data General mini-computer described in The Soul of a New Machine by Tracy Kidder. Both the Convex and the Alliant systems were fabricated from CMOS gate arrays and could operate in an air-conditioned computer center without special cooling. In contrast, the Cray systems of the era were cooled by circulating freon.

The performance of the mini-supercomputers was 40-60 MFLOPS, in contrast to 100 MFLOPS for a Cray supercomputer. The cost of the mini-supers was less than 1/4 the cost of a Cray (pricing is hard to quantify; mini-supercomputer companies pursued business aggressively, and the published price was frequently not the purchase price).

The Second Generation of Mini-Supercomputers

A second generation of mini-supercomputer companies was founded in the late 1980s. These included Cydrome, Multiflow, Stardent (which was formed from two supercomputer workstation companies, Stellar and Ardent), Thinking Machines and MasPar Computer Corp.

With the exception of the Stardent system, which was a desktop supercomputer graphic workstation, the second generation used advanced parallel architectures to achieve performance.

Cydrome and Multiflow were based on architectures that could execute more than one instruction per clock cycle. These architectures are called Very Long Instruction Word (VLIW) computers, since multiple operations are packed into a single long instruction word. The compilers for these systems attempted to schedule numeric computation into as many as five parallel operations per instruction word.
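The packing idea can be sketched in a few lines of Python. This is a toy illustration of the principle, not the Multiflow or Cydrome scheduler; the operation names, dependence sets and five-slot limit are assumptions for the example.

```python
# Toy VLIW packing: independent operations are grouped into one long
# instruction word (up to five slots here); an operation that depends on
# a result must wait for a later word.
def pack_vliw(ops, deps, slots=5):
    """ops: operation names in program order.
    deps: maps an op to the set of ops whose results it needs.
    Returns a list of instruction words (ops issued together)."""
    words, done = [], set()
    pending = list(ops)
    while pending:
        word = []
        for op in list(pending):
            if len(word) == slots:
                break
            # an op may issue only if all of its inputs were
            # produced in an earlier instruction word
            if deps.get(op, set()) <= done:
                word.append(op)
                pending.remove(op)
        done |= set(word)
        words.append(word)
    return words

ops = ["load a", "load b", "add", "store"]
deps = {"add": {"load a", "load b"}, "store": {"add"}}
schedule = pack_vliw(ops, deps)
# the two independent loads share one word; add and store each wait
```

The two loads issue together because neither depends on the other; the add must wait one word for its operands, and the store waits for the add, so the schedule is three instruction words long.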

Neither Multiflow nor Cydrome was particularly successful. Cydrome failed before shipping its first system, in part because of inexperience among its investors: Cydrome was backed by individual investors who did not have much experience with high-tech product development, and when the product was late, they shut the company down.

Multiflow was more successful and shipped its TRACE computer system. However, the company failed in 1990, a few years after shipping its first system, after consuming $65M of venture capital.

The VLIW technology developed at Cydrome and Multiflow had a great influence on the direction of computer architecture. Josh Fisher of Multiflow and Bob Rau of Cydrome later went to work for H-P, where they were key contributors to the H-P/Intel IA-64 architecture.

John S. O'Donnell, who was Vice President of Engineering at Multiflow, went on to found Equator Technologies, which makes a VLSI microprocessor for DSP and other media applications. Like the Multiflow system, the Equator processor is programmed with a trace scheduling compiler (the compiler picks the most likely execution paths, or traces, through the program and schedules each trace as if it were straight-line code).

Head-to-head with Cray: Massively Parallel Processing

Both Thinking Machines and MasPar designed and sold massively parallel SIMD computer systems with as many as 32K and 16K processors, respectively. These systems were designed to deliver performance in the range of the Cray XMP.

SIMD is an acronym for Single Instruction, Multiple Data. In a SIMD system an array consisting of thousands of processors either executes the same instruction or sits out that instruction cycle. For example, two matrices can be added together in a single instruction cycle by storing the matrix pairs to be added on different processors. The add instruction will be executed by all processors containing the matrix data, producing the result. For some applications (e.g., fluid flow simulation) SIMD parallel processors exhibit very high performance. However, there are other applications (VLSI circuit simulation) that do not fit these architectures well.
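The lockstep execution described above can be mimicked in a few lines of Python. This is a toy sketch of the SIMD idea, not MasPar or Thinking Machines code; the eight-element array and the activity mask are just illustrative assumptions.

```python
# Toy SIMD lockstep execution: each "processor" holds one element and,
# under the control of an activity mask, either executes the broadcast
# add or sits out the instruction cycle.
def simd_add(a, b, active):
    """One broadcast ADD cycle across a row of processing elements."""
    return [x + y if on else x for x, y, on in zip(a, b, active)]

a = list(range(8))               # data held by eight processing elements
b = [1] * 8
mask = [x % 2 == 0 for x in a]   # odd-valued elements sit this cycle out
out = simd_add(a, b, mask)
```

All active elements are updated in a single "cycle" regardless of how many there are; that is the source of SIMD performance on regular, array-shaped problems, and the activity mask is why irregular problems fit the architecture poorly.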

Thinking Machines Corp.

Why it's so rejuvenative for Danny Hillis to work at Disney Imagineering:

As a kid, promising a trip to Disneyland was the way his parents got him to make his bed, mop the floors, and mow the lawn. He earned merit points which could be spent on rides. But his most memorable visit was in 1984; he needed to write his PhD thesis about the Connection Machine in order to satisfy graduation requirements. He was living in a Boston loft with six other people, and had recently founded Thinking Machines, and his life was crowded and noisy. So one day he went to Logan airport and got on the first plane, which happened to be going to Orlando. From a resort hotel the next morning, he walked into Disney World, sat on a slatted-wood bench in front of the Cinderella Castle, and began working.

From WIRED Magazine, The Long Now by Po Bronson, May 1998

Thinking Machines Corp. (TMC) was founded in 1983 by Danny Hillis and others, in Cambridge, Mass. Supposedly the Thinking Machines architecture was based on Hillis' PhD thesis. Hillis' thesis lacks detail, perhaps the result of its being written on a park bench in Disney World. The observation in the thesis that may have launched Thinking Machines is that most of the silicon in a computer system is given over to memory, and only a tiny fraction of that memory is active when data is fetched for a computation. If each block of memory were coupled with a processing element, most of the memory could be active during computation. This would result in computation that not only used silicon resources more efficiently, but was much faster than scalar computation.

This idea was not without precedent. Goodyear Aerospace built a massively parallel processor that was used for signal processing, and VLSI content addressable memories had been fabricated by several groups.

Omni: Why did you name your company Thinking Machines?

Hillis: We wanted a dream we weren't going to outgrow. Building a thinking machine has always been a personal dream of mine, and my conception of the Connection Machine was part of that. I like to say I want to make a computer that will be proud of me.

This interview is allegedly from Omni Magazine. I found it via a Google search, but I have been unable to verify the issue date or the correctness of the transcript. The quote from Hillis is famous, however.

The original application area envisioned for Thinking Machines' Connection Machine was artificial intelligence and symbolic computation. The first system, the CM-1, had no hardware support for floating point computation. The second generation system, the CM-2, added floating point hardware, but it was separate from the processors, connected by a bit-serial path that limited floating point throughput.

Both the CM-1 and the CM-2 were fabricated with CMOS gate arrays. There were rumors that TMC attempted a full custom VLSI implementation, but failed.

The final generation Connection Machine, the CM-5, was not a pure SIMD processor. It resembled the Cray T3D in that it could perform either SIMD or MIMD computation. The CM-5 was based on the SPARC microprocessor.

Thinking Machines consumed about $100 million from venture capital firms and wealthy individuals before it stopped manufacturing hardware, having shipped around 100 systems. The company that remained concentrated on data mining applications originally developed for the massively parallel platform.

MasPar Computer Corp.

MasPar Computer Corp. was founded in 1988. The corporate objective was to produce a computer system in the price range of a mini-supercomputer (e.g., $100K to $500K) that could deliver supercomputer levels of performance (e.g., the performance of a Cray XMP).

MasPar was founded by Jeff Kalb, who had an extensive background in semiconductors. There was a joke that the JK flip-flop was named after Jeff Kalb. Jeff worked for Charles Sporck, the founder of National Semiconductor, and later for DEC, where he was the Vice President in charge of VLSI manufacturing. It was Jeff's group that designed and built the microVAX chips, which formed the core of DEC's post-VAX mini-computer business.

Some of the engineers at DEC were doing research on massively parallel processors. Although Jeff believed that this was a promising area for future development, DEC did not pursue it. In 1987 Jeff left DEC and moved to Los Altos. While working at the venture capital firm Kleiner-Perkins, Jeff wrote the business plan for MasPar. In 1988 MasPar closed a first funding round of $8 million. The lead investor was Kleiner-Perkins, and Floyd Kvamme, who is now Dubya's (George II) technology adviser, became a member of the board of directors.

Jeff was a company builder. He believed that the availability of cheap high performance numeric computing would enable new applications and that massively parallel supercomputers would spread the way mini-computers had. MasPar would become a growing, thriving company, like DEC during its glory days.

I was employee number 16 at MasPar and the first member of the compiler group. MasPar hired Compass, in Massachusetts, to provide Fortran compiler technology. Compass had also done much of the work on the Thinking Machines Fortran compiler, and TMC was surprised to find that they did not have an exclusive license to this software.

MasPar's original offices were in Santa Clara, CA. When I interviewed for a position in the compiler group only a few cubicles were occupied, but there was a large pile of crated DEC workstations. One of the first questions I asked was "why DEC?", since Sun systems were better. It turned out that DEC had provided $1 million of hardware credits toward the purchase of DEC engineering workstations. Jeff had called up Ken Olsen, DEC's CEO, and told him that he was going to use Sun workstations. To avoid the embarrassment of the developer of the microVAX using Sun systems, DEC provided the hardware credits. At the time it was hard to argue with free hardware, but accepting it was costly in the long run, since it was difficult to interface the DEC internal bus with the MasPar system. The DEC system was also unpopular with the user base.

The beta hardware for the MP-1 was completed within three months of schedule and all beta systems were purchased by customers. MasPar survived the 1989 earthquake (our power never went out and our computer network stayed up), the administration of George I and the Gulf War. Two years after the MP-1 shipped, MasPar started shipping the MP-2, a faster and more reliable system. For some applications the MasPar systems delivered performance in line with the Cray XMP. MasPar also completed a RAID based disk array to provide high performance parallel I/O into the computer system. MasPar systems were purchased by a number of supercomputer centers and by corporations like Ford and American Express. From the time MasPar shipped the first MP-1 to the time it stopped manufacturing hardware, MasPar shipped over 200 systems.

Eugene Brooks consulted for MasPar, advising us on porting numeric applications to MasPar's massively parallel architecture. MasPar stocked the company kitchen with soft drinks, chips and salsa. I remember sitting at a table eating chips and having a friendly argument with Eugene about his Attack of the Killer Micros paper. The thrust of the paper was that supercomputer companies were doomed to fall before the increasing power of the microprocessor. I argued that Eugene was wrong: the same technology that was powering the increase in microprocessor speed could be harnessed to build ever faster massively parallel processors.

Perhaps my argument was correct from a purely technological point of view, but it ignored a number of issues. If a scalar processor, perhaps paired with a special purpose processor like a graphics engine, could deliver the performance an application needed, the user would not go to the considerable effort of porting the application to a parallel platform. The cost of high performance computing also dropped dramatically: a machine that would have counted as a supercomputer by the standards of 1990 can now be ordered from Dell Computer's Web page. There will always be a place for supercomputers, but the number of applications that cannot be solved on a microprocessor based system has shrunk.

Five years after I joined MasPar, in late 1992, the company was in trouble. Jeff Kalb was forced out as CEO and a new CEO was hired. There was a second round of layoffs, a staff reduction of 25%, in which I lost my job. At that point MasPar had consumed about $40 million in venture capital. MasPar had had two non-consecutive profitable quarters, but sales were in a downturn. The layoff happened during one of Silicon Valley's worst downturns, so I felt fortunate to have several job offers soon afterward. MasPar was also very generous with severance, especially for long term employees.

Every supercomputing company that I've mentioned either failed or was bought out. Despite this, supercomputing is not dead, but the hoped-for market never materialized, falling under the assault of the "Killer Micros".

IBM makes a line of parallel supercomputers, and cluster supercomputers are used in a number of applications. Cray Research, the company founded by the great supercomputer architect Seymour Cray, was purchased by Silicon Graphics, only to be sold a few years later to Tera Computer. Tera Computer was founded by Burton Smith, the designer of the HEP computer system. Tera was backed by the US government, and I suspect that the purchase of Cray had the hidden hand of government agencies behind it.

Tera Computer was founded to produce a second generation version of the HEP, now called the Multi-Threaded Architecture (MTA). Ten years were spent designing and developing this system. As far as I've been able to tell, only two Tera MTA systems have been purchased: one by the San Diego Supercomputer Center and one by Logicon, under a Naval Research Lab. contract. Having never produced a successful computer, Tera changed their corporate name to Cray. Almost all of Tera's revenue stream comes from computer systems developed by Cray.

An excellent source of information on the supercomputer era is the Survey of High Performance Computing Systems.

Tales from the Internet Bubble

Although much is made of the Mosaic browser and HTML, popular use of the Internet was made possible by cheap 14.4K modems, which began to appear around 1994. One year later, on August 9, 1995, Netscape's IPO ignited the Internet boom, which fueled insane stock market valuations for Internet companies.

Designing and building a computer system takes two to three years. Even with VLSI design, where most of the computer logic can be packaged in a few chips, the boards, power supply, cooling and enclosure have to be designed and manufactured. It took Compaq a decade to build a billion dollar valuation. Netscape built such a valuation in one year.

The Internet boom attracted huge pools of venture capital. Much of this went to Internet companies, but money also flowed to a variety of start-ups.

In retrospect it is staggering to think about the amount of money that was spent. AOL purchased Netscape for $4.3 billion and got nothing that I've been able to discern. Even more staggering sums were spent on building Internet infrastructure: $650 billion in equity, bonds and bank loans went into fiber optic network infrastructure and into companies like Covad that provide DSL. Billions more were invested in network router companies like Juniper Networks.

The normal rules of business seemed to have been suspended. I worked for Synopsys, a company that makes VLSI design software, down the street from Netscape's galactic world headquarters. As the weeks went by I saw Netscape take over more and more buildings. Netscape's growth was explosive. Yet they were giving away their software and appeared to be a company without a solid business plan or even a grasp of reality. At one point they talked of displacing Microsoft, yet obviously a browser could not run without the services of an operating system (disk access, graphics display, virtual memory, modem I/O, etc.). I declined the opportunity to interview at Netscape because I was sure that they could never survive.

Instead of going to work for Netscape, I went to work for Quickturn, on VLSI design simulation software. After a year or so one of the people I worked with left Quickturn for a now largely forgotten company named Junglee, formed by some Stanford graduate students. Junglee developed a "shopping agent" that allowed the Web to be searched for price comparisons. After about a year Amazon purchased Junglee for 1.5 million shares of Amazon stock, though the technology does not appear to have ever been used by Amazon. My colleague made somewhere between $100,000 and $200,000 on his stock options.

The funding frenzy that resulted from the streams of money flowing into venture capital funds loosened the usual flinty standards used to decide if a company would get money. In many cases it seemed that all one needed was a talent for promotion and a slick business plan. Compared to the technology that was developed in the EDA industry for chip design, the technology developed by many of the Internet companies was childish. Yet there was a constant stream of articles in the San Jose Mercury about companies that had been around a year or two making fortunes selling out to greater fools, either corporate buyers or that greater fool of last resort in the Internet era, the equity investor.

There were companies, especially in the Internet business-to-business and networking sectors, that had very smart people who developed impressive technology. The founders of these companies were hoping to build the keystones of an industry. But the culture of "take the money and run" started to become pervasive. Many people genuinely believed that companies like pets.com or amazon.com were worth the huge valuations given to them. The end of this era of Silicon Valley history is well known and has been written about extensively elsewhere.

What are those employee stock options worth?

There are three reasons to work for a start-up company:

  1. You get to work on interesting leading edge projects.
  2. You get to work with smart, motivated and interesting people.
  3. You might make some real money on your stock options.

There are two related reasons to think twice before joining a start-up:

  1. The start-up may fail.
  2. You may become unemployed and have a hard time finding a new job.

Some people discount the last point. In Silicon Valley, during the good years, engineers can usually find a new job in a month or two. But during the downturn in 1992 and during the current technology bust, even good engineers with lots of experience can have a hard time. Age discrimination is a fact of life in Silicon Valley and older engineers (anyone over 35) may have an especially difficult time.

Start-up companies give their employees stock options for a variety of reasons. The central reason is to attract bright, talented and experienced engineers to a venture that might fail and leave them unemployed. The issue of failure is one that is not discussed much in Silicon Valley mythology. The other reason that start-up companies award stock options is to compensate their staff for long hours and stress.

Somewhere between four out of five and nine out of ten venture funded start-up companies do not make it to an IPO. Those that don't go public are bought out by other companies or go bankrupt.

Working for a failing start-up is not much fun. To have a chance at succeeding, a start-up must have employees who believe in the company and in its future success. When a start-up fails, those dreams crash as well. When MasPar was on its downhill slide I felt that no matter how hard I worked, I could do nothing to improve the company's fortunes. This is a quick path to burnout, and by the time I left MasPar I was feeling pretty burned out.

So one answer to "what are those stock options worth" may be: nothing. The company may fail. At one time I owned (yes, I actually purchased the options) 40,000 shares of MasPar stock. After the first round was funded, this was about 0.5% of the issued shares. If the company had gone public at around $10/share (which was how people thought of IPO valuation before the Internet bubble), my stock would have been worth around $400K. The plan was to go public about four years after the company was founded. Although MasPar never officially went bankrupt, the reverse splits at the end reduced my block of stock to 4 shares (i.e., a 10,000 to 1 reverse split). MasPar's data mining software was purchased by Accrue Software. Accrue was a public company whose stock got hammered in the Internet crash. Accrue Software subsequently went bankrupt, making my 4 shares worthless. Accrue's assets were purchased by a company called Datanautics.

Although many start-up companies fail, some succeed. To understand what an engineer's options might be worth if the company goes public or is bought out, the stock ownership structure of a start-up must be considered.

The venture capitalists provide the funding for the start-up. In return, the first round of venture capital usually takes around 60% of the company's stock. Of the 40% that is allocated for employees, at least half (20% of the total) is usually reserved for future employees (e.g., engineers, a VP of marketing, a VP of sales). A serious start-up company will have a CEO, a CFO/VP of Operations, and at least one VP of engineering, plus one or two senior engineers. In many cases these are people who joined the company before it had funding and may have worked for a time without salary. This founding group will split the remaining 20%, with the CEO getting around 5% and the rest of the founding group getting between 2% and 4%.

If you are a senior engineer with skills that the start-up really needs, and they are having a hard time hiring people like you, the start-up may offer you around 1%. Most engineers are offered stock options that account for 0.5% or less of the outstanding first round stock.

Before the Internet madness, the classic model for a venture funded company was three funding rounds: an initial round to build the basic product; a follow-on round to provide capital for building out the product and funding marketing and sales; and a final third round to fund a second generation product and take the company to an IPO. Each round dilutes the stock held by employees as a percentage of the total number of shares outstanding.

Rarely is there any objective method for valuing a start-up company that consists of some potential technology and a group of founders. A start-up that has a number of venture capital firms interested can get a higher valuation. Finally, the lead investor and the start-up agree to an initial valuation, and the lead investor takes part in recruiting the rest of the funding round. Let's assume that a valuation of $8 million is agreed on at a share price of $0.20/share. This means that there are 40 million shares in the company, of which 24 million are allocated for the VCs and 16 million are allocated for the employees. If you're one of the highly valued engineers with 1%, this will give you 400,000 shares.

The table below shows a very successful start-up (outside of the tulip years of the Internet boom). The company has hit all of their development milestones and the beta products are well received. The products sell well and the company has produced a second generation product. In such an optimistic scenario the stock valuation increases by slightly over a factor of two with each funding round.

Round   Funding        Total shares   Employee %   Share price
1       $8,000,000     40,000,000     40%          $0.20
2       $12,000,000    66,000,000     25%          $0.50
3       $20,000,000    84,181,818     19%          $1.10

When the company goes to an IPO the stock is valued at $2.00/share. The underwriters want to bring the company public at a price of $10/share, so there is a reverse stock split to adjust the number of outstanding shares. Your stake in the company is worth $800,000. This is a lot of money, but it's certainly not enough to buy a Larry Ellison style maxi-yacht.
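The dilution arithmetic above can be checked with a short calculation. This is just a sketch of the example scenario: the share counts and prices come from the table, and the 400,000-share (1%) engineer stake and the $2.00/share IPO valuation are the example figures from the text.

```python
# Per-round figures from the table: (round, funding, total shares, price/share)
rounds = [
    (1,  8_000_000, 40_000_000, 0.20),
    (2, 12_000_000, 66_000_000, 0.50),
    (3, 20_000_000, 84_181_818, 1.10),
]

employee_pool = 16_000_000   # 40% of the 40 million first-round shares
engineer_stake = 400_000     # a 1% grant against the first-round shares

# Each round issues new shares, so a fixed share count shrinks as a
# percentage of the growing total.
for rnd, funding, total_shares, price in rounds:
    print(f"Round {rnd}: employees hold {employee_pool / total_shares:.1%}, "
          f"a 1% engineer now holds {engineer_stake / total_shares:.2%}")

# At the IPO the stock is valued at $2.00/share. The reverse split needed
# to hit a $10 offering price changes share counts, not dollar value.
ipo_price = 2.00
print(f"Engineer's stake at IPO: ${engineer_stake * ipo_price:,.0f}")  # $800,000
```

The point the numbers make: the engineer's nominal 1% is diluted to under 0.5% by the third round, so the final payout depends as much on how cheaply the later rounds were raised as on the grant itself.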

The scenario above is very optimistic. If the company valuation does not increase as much with each round, more stock will have to be sold to raise money and the final per share valuation when the company goes public will not be as much.

If the company is bought out, the valuation may be considerably worse. For example, if the company is bought out for $0.50/share, your stake would only be worth $200,000. In Silicon Valley this is enough for a down payment on a house. Again, this is based on 1% of the company and most engineers will not have a stake this big. If, instead, you have 0.5% you might have made more on stock options at a big company, where there is less risk and better working hours.

The main lesson to learn from this is that unless you are a company founder or a VP level employee, there is a good chance that you will work long hours for years, live with a lot of stress and not make that much money. Unless the company is a huge "home run" like Microsoft, Oracle, Sun Microsystems or Cisco, only the founders and the VCs make any real money.

Red Flags to Watch Out For

MasPar Epilog, 2005

In the discussion above I ascribe the death of MasPar to the "Attack of the Killer Micros". This is true, but it misses an important point. In theory MasPar could have ridden the wave of increasing microprocessor power as well. The ability to increase chip clock frequency and the amount of logic on a chip could have been leveraged to produce proportionally powerful MasPar supercomputers.

What actually killed MasPar was that there was a limited market for supercomputers. The amount of money that would have been required to chase the "Moore's Law" curve of increasing processor power could not be justified by the size of the supercomputer market. The third generation MasPar system was never funded and never built.

When MasPar's supercomputer line died, the company went into data mining software, which it had been developing for MasPar MPP applications (for example, American Express was a customer at one time). The company was renamed NeoVista Software. Someone who started out as a junior programmer at MasPar, Jon Becher, eventually became CEO of NeoVista.

NeoVista was then bought by a company called Accrue Software. I believe that the transaction was an all-stock transaction, or at least it was for those who held the class of stock I had. I actually had four shares of Accrue Software (that would be a 10,000 to 1 reverse split for me). Accrue Software went public during the dot-com madness. At one time my shares were worth something like $75. But to get E*Trade to sell them, they would have had to remove the "restricted stock" designation, for which E*Trade charged $70.

Accrue burned through a mind-boggling amount of money and then died in the great dot-com dieoff:

Eight Web analytics software companies died last year, and Accrue Software is battling to avoid a similar fate. Accrue replaced most of its senior management and has worked for much of the past year to slow its cash burn-rate. It suffered a $211 million loss in 2001, and slashed staff. In May, the company's stock was moved from the main Nasdaq trading board to the Nasdaq Small Cap Market.

Jon Becher became CEO of what was left of Accrue Software. Accrue does not seem to be trading anymore even as a "pink-sheet" stock.

[Accrue Chief Technology Officer John] D'Albis has been with Accrue since 2000, when he became chief architect as a result of Accrue's acquisition of Pilot Software. He was named Accrue's CTO in October 2001, but in June was reappointed as CTO of Pilot Software, a newly formed unit that will once again develop and market the Pilot Software suite of business intelligence offerings.

Apparently Pilot Software rose from the dead and lives on, although it is a privately held company. Jon Becher is now CEO of Pilot Software and D'Albis is the CTO. Any value for MasPar has been entirely wiped out. The investors at all stages experienced large losses.

Additional Reading

Ian Kaplan, May 2001
Revised: September, 2005