
The future of work: social, pop culture and wood stain

This classic ad from 1994 is now part of British pop culture. ‘It does exactly what it says on the tin’ is a great catchphrase, and it’s also a great way to describe a whole class of product and a whole class of startup.

It’s really easy to explain what Rigup, Everlaw, Onshape, Figma or Frame.io are trying to do. They might not succeed (just as the wood stain might not be any good) but you know exactly what the problem is. You could say the same about WhatsApp, or perhaps even early Instagram. They’re discoveries - they found a problem and found a solution (and then executed like crazy for a decade, of course). ‘Do you need wood stain that’s quick drying - yes or no?’

On the other hand, in the last decade, as social has expanded and splintered way beyond Facebook, we’ve had an endless flow of new social experiences that aren’t utilities in that way - they’re pieces of pop culture. They have some ideas about how people feel and ideas about something that might express that, and they launch out into the internet like fireworks, or like fashions, bands or magazines. They think they see some piece of the zeitgeist and some way to express it.

That’s been clear for a while, but I think you could say the same thing about a lot of new productivity apps as well - they’re trying to capture something intangible about the way we work, collaborate, share and organise. Now that we’re all locked down, half the software engineers on earth are sitting at their computers swearing at their tools and thinking of new ways to collaborate, with video, text, voice, screen sharing, or something else again, and with synchronous or asynchronous models, or something else. But the interesting ones here aren’t just ‘video’, or ‘screen sharing’ or ‘notes’ - they’re bets on how to present that differently, and to work differently. They’re bets on psychology and on how people might feel about working that way.

That means the mental process for looking at them from the outside is different. If you’ve built a SaaS tool for X in industry Y, I can take a view on how big that problem is and how big that industry is, and on how you’ve solved it. I might well be wrong (Salesforce unlocked a vastly bigger opportunity in ‘software for sales’ than anyone suspected) but I can at least be wrong in a logical way. But it’s tough to do a TAM for a band, and you couldn’t have done one for consumer social apps like Snap or TikTok or (this week) Clubhouse. You bet on the team, or on the graph (if it’s not too early to have one), or on your feeling. I think the same applies to many new productivity ideas - they’re pop culture.




Dust in the Light

Everyone in Madison knew to avoid Badger Road.

It was 1996, and the city was celebrating being christened the best place to live in America by Money Magazine:

Money Magazine declares Madison the best city in America

This year, Madison (and the rest of Dane County) earns the No. 1 position among the 300 biggest U.S. metropolitan areas in our 1996 Best Places to Live in America ranking. It snagged the top spot because apparently someone forgot to tell the folks in Madison that life is supposed to be full of trade-offs. The 390,300 residents of Dane County, 80 miles west of Milwaukee in south-central Wisconsin, have a vibrant economy with plentiful jobs, superb health care and a range of cultural activities usually associated with cities twice as big. Yet this mid-size metro area also offers up a low crime rate and palpable friendliness you might assume are available only in, say, Andy Griffith’s Mayberry. The news that the great Dane County is top dog this year probably won’t surprise the region’s residents. More than 90% of Madisonians rated their quality of life good or very good in a recent survey. Since the cosmopolitan Madison area — the city accounts for about half the county’s population — is surrounded by Wisconsin’s everpresent dairy farms, it seems only right to toast 1996’s No. 1 big cheese with a wedge of aged Wisconsin cheddar.

Still, despite the excellent quality of life, most everyone had, at one time or another, been made aware that the neighborhoods around South Park Street were “dangerous”; that wasn’t such a big deal, though, because no one you knew ever went there.

Madison’s Crescent

In 2016, a blogger named Lew Blank observed that the racial distribution of Madison neighborhoods formed a crescent:

When you look at Madison’s Racial Dot Map, you notice a pattern. The bottom and right sides of the map hold the majority of the black and hispanic population. It forms a curve almost – starting in the South Side, crossing along the east side of Lake Monona, and ending at the Northeast Side. I dub this curve-like chain of black and hispanic neighborhoods “The Crescent”.

Non-white neighborhoods in Madison form a crescent

This crescent was also seen in poverty indicators:

Shown below is a map of every single school in Madison with above average usage of free/reduced lunch programs:

Schools with kids in poverty are in the crescent

That’s right. 23 out of 23 schools in Madison that have above average usage of free/reduced lunch programs all fall along the Crescent.

The deal is, the children who need free/reduced lunch are poor, obviously. So does that mean that the poorest neighborhoods of Madison fall along the Crescent? Unfortunately, that’s exactly what it means.

In Madison, a black child is 13 times more likely than a white child to be born into poverty – an insanely high disparity.

Black children in Madison are much more likely to be born into poverty

So, Madison’s black and hispanic neighborhoods (the ones on the Crescent) are its poorest neighborhoods, and Madison’s white neighborhoods (the ones not on the Crescent) are its wealthiest neighborhoods.

Unsurprisingly, the crescent was also seen in educational outcomes:

It’s clear, then, that schools in the Crescent of poor black/hispanic neighborhoods would be expected to have below-average academic success. And unfortunately, the map below of all Madison schools with below-average reading proficiency rates indicates that this is exactly the case.

Schools in the crescent have worse performance

Believe it or not, 24 out of 24 schools with below-average reading proficiency rates fall along the Crescent.

And, of course, crime; I personally added the red box to Blank’s final map to indicate the Badger Road area I mentioned above:

The map below of the addresses of incarcerated Madisonians shows that incarceration in Madison tends to be clustered around the Crescent.

Incarceration rates in Madison are worse in the crescent

There was one map that Blank was missing though: the notorious Home Owners’ Loan Corporation map. The Home Owners’ Loan Corporation was a federal agency formed as part of President Franklin D. Roosevelt’s New Deal; its purpose was to refinance home mortgages, but as part of the process, the Corporation mapped out U.S. neighborhoods by risk, and by risk, HOLC all-too-frequently meant percentage of non-white people, particularly African Americans. Here was the map of Madison (laid on top of a current map in order to deliver a north-is-up perspective):

The crescent matches Madison red-lining

Once you see the crescent, you can’t unsee it. And, relatedly, you can’t escape the impact on Madison. In 2018, African Americans made up 7% of the population but 43% of arrests and 46% of Dane County Jail inmates; African American students were 18% of the school district, but received 57% of suspensions; 10% of African American students received an “advanced” or “proficient” score on the math portion of Wisconsin’s standardized testing, while 61% of white students did the same (the proportions were similar for all subjects). Meanwhile, the average house price in the Burr Oaks neighborhood (which includes Badger Road) is $145,300; the Madison average is $300,967.
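
To put those shares on a single scale: dividing a group’s share of arrests by its share of the population gives an overrepresentation rate, and the ratio between groups gives one disparity figure. Here is a minimal sketch of that arithmetic using only the numbers quoted above; treating “everyone else” as the comparison group is my assumption, since only the figures for African Americans are given.

```python
# Convert the quoted 2018 Madison shares into a relative-rate disparity figure.
# Assumption for illustration: "everyone else" (1 - share) is the comparison group.

pop_share_black = 0.07      # 7% of the population
arrest_share_black = 0.43   # 43% of arrests

# Overrepresentation: share of arrests relative to share of population.
rate_black = arrest_share_black / pop_share_black              # ~6.1x
rate_other = (1 - arrest_share_black) / (1 - pop_share_black)  # ~0.61x

print(f"Relative arrest-rate ratio: {rate_black / rate_other:.1f}x")
# -> Relative arrest-rate ratio: 10.0x
```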

The pattern is even worse in Milwaukee, Wisconsin’s largest city, and the most segregated city in the country; small wonder that Wisconsin ranks so highly when it comes to the disparity between black and white median household income:

Wisconsin has amongst the worst disparity between black and white median income

And poverty rates:

Wisconsin has amongst the worst disparity between black and white poverty rates

And, as the paper from which these charts are drawn puts it:

Racial disparities in rewards and returns (opportunity, compensation, security) are matched by racial disparities in punishment…Five (WI, IA, MN, IL, NE) of the ten worst-performing states, ranked by the ratio between black and white rates, are in the Midwest. Such disparities, glaring in their own right, also have profound impacts on individuals, families and communities. Incarceration short circuits equal citizenship—the right to vote, educational and employment opportunities, access to housing—in deep and lasting ways.

Wisconsin has amongst the worst disparity between black and white incarceration rates

The one state competing with Wisconsin for the highest measurements of disparity is the neighbor to the west: Minnesota.

Minneapolis and George Floyd

While red-lining helped shape segregation in many cities, Minneapolis was pre-emptive about its discrimination; beginning in the 1910s, Minneapolis real estate deeds included “Covenants” that explicitly excluded African Americans. A team from the University of Minnesota has been researching real estate deeds to uncover these covenants, and created this striking time-lapse of their spread:

Racial covenants were ruled unconstitutional by the Supreme Court in 1948, but the effect remains; compare the racial covenant map to the racial dot map Blank referenced above — the blue (which is white people) adheres to the blue of racial covenants:

A map of racial covenants closely matches a map of Minneapolis' population

That red cross, meanwhile, is the location of the homicide of George Floyd, in the decidedly non-blue portion of the map. “Homicide” was the word used by the Hennepin County Medical Examiner, which ruled that Floyd’s cause of death was “Cardiopulmonary arrest complicating law enforcement subdual, restraint, and neck compression”; it is up to prosecutors and a jury to decide if that homicide constitutes murder.

The rest of the country did not take so long: nearly everyone has seen the video of Minneapolis police officer Derek Chauvin with his knee on Floyd’s neck for 8 minutes and 46 seconds, even as Floyd first complains he cannot breathe, and then, for the final two minutes and 53 seconds, falls silent.

Dust in the Air

The first version of the Hennepin County Medical Examiner’s autopsy, at least the part quoted in the criminal complaint against Chauvin, read a bit differently:

The Hennepin County Medical Examiner (ME) conducted Mr. Floyd’s autopsy on May 26, 2020. The full report of the ME is pending but the ME has made the following preliminary findings. The autopsy revealed no physical findings that support a diagnosis of traumatic asphyxia or strangulation. Mr. Floyd had underlying health conditions including coronary artery disease and hypertensive heart disease. The combined effects of Mr. Floyd being restrained by the police, his underlying health conditions and any potential intoxicants in his system likely contributed to his death.

The underlying health conditions and intoxicants are still in the final report; what has changed is their relative prominence in explaining Floyd’s death. One suspects that in a different world — say, the world that was Minneapolis for most of the 20th century — said underlying health conditions and intoxicants would have been held to be the cause of death, not “Other significant conditions.” Perhaps there would be a two-paragraph story in the Star Tribune on page A17, or more likely Floyd’s death would have disappeared into a police filing cabinet, a non-event as far as most of Minneapolis was concerned. At best there would be a murmur to avoid that sketchy Powderhorn neighborhood, a rarely-visited, barely-remembered exception to Minneapolis’ status as one of the best cities in America.

Those who knew Floyd or witnessed his death would know better, of course. They would, as Kareem Abdul-Jabbar wrote in the Los Angeles Times, shout “Not @#$%! again!” Abdul-Jabbar explains:

African Americans have been living in a burning building for many years, choking on the smoke as the flames burn closer and closer. Racism in America is like dust in the air. It seems invisible — even if you’re choking on it — until you let the sun in. Then you see it’s everywhere. As long as we keep shining that light, we have a chance of cleaning it wherever it lands. But we have to stay vigilant, because it’s always still in the air.

What made the Floyd story different from all of the surely similar examples that went before it is the Internet, specifically the combination of cameras on smartphones and social networks. The former means any incident can be recorded on a whim; the latter means that said recording can be spread worldwide instantly. That is exactly what happened with the Floyd homicide: the initial video was captured on a smartphone and posted on Facebook, triggering a level of attention to the Floyd case that in all likelihood changed the nature of the autopsy and led to the pressing of charges against Chauvin — a chance, in Abdul-Jabbar’s words, of cleaning at least one speck of that omnipresent dust.

Trump’s Tweets

Notably, this is not why Facebook is in the news this week; yesterday hundreds of employees staged a virtual walkout to protest the fact that the company committed, in their mind, a sin of omission: not deleting posts from President Trump. Those posts are copies of Trump tweets, three of which Twitter modified in some way last week. The first two were Trump allegations that voting by mail had a high risk of fraud; Twitter attached a “Get the facts” label that led to a page disputing Trump’s claim.

The more serious intervention came early Friday morning, when Twitter obscured a Trump tweet because it, in their determination, “violated the Twitter Rules about glorifying violence.”

Twitter obscured a Trump tweet

Twitter — at least as far as citing its rules is concerned — apparently objected to the phrase “when the looting starts, the shooting starts,” which is associated with a brutal segregationist police chief from Miami; Trump claimed to not know the saying’s history, but honestly, arguing about that phrase feels like a distraction from Trump’s all-capitalization use of the descriptor “thugs”, a word with significant racial undertones. That certainly seemed to be what the protesting Facebook employees picked up on; from the New York Times:

“The hateful rhetoric advocating violence against black demonstrators by the US President does not warrant defense under the guise of freedom of expression,” one Facebook employee wrote in an internal message board, according to a copy of the text viewed by The New York Times. The employee added: “Along with Black employees in the company, and all persons with a moral conscience, I am calling for Mark to immediately take down the President’s post advocating violence, murder and imminent threat against Black people.” The Times agreed to withhold the employee’s name.

What is notable about that New York Times story is that it reproduces the post (and tweet!) that the employees want taken down:

The New York Times published the post and tweets many want banned

So did the story I linked to above about that Miami police chief, and countless other publications. Indeed, it seems rather obvious that Twitter’s action — and those objecting to Facebook’s lack of action — ensured that Trump’s tweet would be far more widely read than it might have been otherwise.

It is not clear that this is a bad thing. Trump’s tweet is abominable, but sadly, of a piece with far too many presidents. The Associated Press wrote in 2019:

Throughout American history, presidents have uttered comments, issued decisions and made public and private moves that critics said were racist, either at the time or in later generations. The presidents did so both before taking office and during their time in the White House…

This extends far beyond the founding fathers, most of whom owned slaves, to the 20th century:

The Virginia-born Woodrow Wilson worked to keep blacks out of Princeton University while serving as that school’s president…

Democrat Lyndon Johnson assumed the presidency in 1963 after the assassination of John F. Kennedy and sought to push a civil rights bill amid demonstrations by African Americans…But according to tapes of his private conversations, Johnson routinely used racist epithets to describe African Americans and some blacks he appointed to key positions.

His successor, Republican Richard Nixon, also regularly used racist epithets while in office in private conversations…As with Johnson, many of Nixon’s remarks were unknown to the general public until tapes of White House conversations were released decades later. Recently the Nixon Presidential Library released an October 1971 phone conversation between Nixon and then California Gov. Ronald Reagan, another future president…Reagan, in venting his frustration with United Nations delegates who voted against the U.S., dropped some racist language.

The part about secret tapes is notable: much of this racism — this dust in the air — was in darkened rooms, unseen by the public. Trump, if nothing else, has no need for secret tapes: we have his very public Twitter account, and all indications, particularly in terms of pre-COVID polling, suggest that it massively weakened his reelection bid.

The president’s threats, meanwhile, continue: yesterday Trump demanded governors around the country crack down on the looting that has in several cities followed peaceful protests, saying he would call in the military otherwise. That certainly seems to be, in broad strokes, in line with the tweet Twitter hid — does it matter that Trump stated his position on a conference call and in the Rose Garden instead of a tweet?

In fact, that is what is so striking about the demands that Facebook act on this particular post (beyond the extremely problematic prospect of an unaccountable figure like Zuckerberg unilaterally deciding what is and is not acceptable political speech): the preponderance of evidence suggests that these demands have nothing to do with misinformation, but rather reality. The United States really does have a president named Donald Trump who uses extremely problematic terms — in all caps! — for African Americans and quotes segregationist police chiefs, and social media, for better or worse, is ultimately a reflection of humanity. Facebook deleting Trump’s post won’t change that fact, but it will, at least for a moment, turn out the lights, hiding the dust.

A Gargantuan Force

It is hard to be optimistic about anything at this moment in time. My regular refrain from the beginning of the coronavirus crisis is that the most likely outcome will be the acceleration of trends that were already happening. That is particularly scary given what I wrote back when Stratechery started in 2013, in a post called Friction:

Count me with those who believe the Internet is on par with the industrial revolution, the full impact of which stretched over centuries. And it wasn’t all good. Like today, the industrial revolution included a period of time that saw many lose their jobs and a massive surge in inequality. It also lifted millions of others out of sustenance farming. Then again, it also propagated slavery, particularly in North America. The industrial revolution led to new monetary systems, and it created robber barons. Modern democracies sprouted from the industrial revolution, and so did fascism and communism. The quality of life of millions and millions was unimaginably improved, and millions and millions died in two unimaginably terrible wars.

Another comparison point is the printing press, which I wrote about last year in the context of Facebook:

Just as important, though, particularly in terms of the impact on society, is the drastic reduction in fixed costs. Not only can existing publishers reach anyone, anyone can become a publisher. Moreover, they don’t even need a publication: social media gives everyone the means to broadcast to the entire world. Read again Zuckerberg’s description of the Fifth Estate:

People having the power to express themselves at scale is a new kind of force in the world — a Fifth Estate alongside the other power structures of society. People no longer have to rely on traditional gatekeepers in politics or media to make their voices heard, and that has important consequences.

It is difficult to overstate how much of an understatement that is. I just recounted how the printing press effectively overthrew the First Estate, leading to the establishment of nation-states and the creation and empowerment of a new nobility. The implication of overthrowing the Second Estate, via the empowerment of commoners, is almost too radical to imagine.

And yet, look again at this past week: a century of institutionalized racism in Minneapolis was not necessarily overthrown, but certainly overwhelmed in the case of George Floyd, because of a post on Facebook. Both peaceful protests and wanton destruction and looting were likely organized on social media. Video of both was captured by ubiquitous smartphone cameras and circulated around the world on said social networks. The Internet is an amoral force — it can effect both positive and negative outcomes — but what should not be underestimated is how gargantuan a force it is.

To that end, while there is much to fear, there is room for hope as well. I am grateful that I can no longer unsee Madison’s crescent, thanks to a blog post. I am angered by the video of Floyd’s death, and appalled at the dust in the air that yes, I was privileged enough to avoid without a second thought. And no matter what upheaval lies ahead, I am certain that the light that illuminates that dust so brightly can never be put away. There are no more gatekeepers, oftentimes for worse, but also for better.


Not even wrong: ways to predict tech

"That is not only not right; it is not even wrong"

- Wolfgang Pauli

A lot of really important technologies started out looking like expensive, impractical toys. The engineering wasn't finished, the building blocks didn’t fit together, the volumes were too low and the manufacturing process was new and imperfect. In parallel, many or even most important things propose some new way of doing things, or even an entirely new thing to do. So it doesn’t work, it’s expensive, and it’s silly. It’s a toy.

Some of the most important things of the last 100 years or so looked like this - aircraft, cars, telephones, mobile phones and personal computers were all dismissed.

But on the other hand, plenty of things that looked like useless toys never did become anything more.

This means that there is no predictive value in saying ‘that doesn’t work’ or ‘that looks like a toy’ - and that there is also no predictive value in saying ‘people always say that’. As Pauli put it, statements like this are ‘not even wrong’ - they do not give you any insight into what will happen. You have to go one level further. You have to ask ‘do you have a theory for why this will get better, or why it won’t, and for why people will change their behaviour, or for why they won’t’?

“They laughed at Columbus and they laughed at the Wright brothers. But they also laughed at Bozo the Clown.”

- Carl Sagan

To understand both of these, it’s useful to compare the Wright Flier with the Bell Rocket Belt. Both of these were expensive impractical toys, but one of them changed the world and the other did not. And there is no hindsight bias or survivor bias here.

The Wright Flier’s first flight, December 17, 1903
The Bell Rocket Belt

The Wright Flier could only go 200 meters, and the Rocket Belt could only fly for 21 seconds. But the Flier was a breakthrough of principle. There was no reason why it couldn’t get much better, very quickly, and Blériot flew across the English Channel just six years later. There was a very clear and obvious path to make it better. Conversely, the Rocket Belt flew for 21 seconds because it used almost a litre of fuel per second - to fly like this for half an hour you’d need almost two tonnes of fuel, and you can’t carry that on your back. There was no roadmap to make it better without changing the laws of physics. We don’t just know that now - we knew it in 1962.
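
Evans’s arithmetic checks out on the back of an envelope. The sketch below is mine, not the essay’s: the exact burn rate and fuel density are assumptions chosen to match ‘almost a litre of fuel per second’ and a roughly water-like propellant.

```python
# Back-of-envelope check of the Rocket Belt numbers above.
# Burn rate and density are illustrative assumptions, not figures from the essay.

burn_rate_l_per_s = 0.9    # "almost a litre of fuel per second"
flight_time_s = 30 * 60    # the hypothetical half-hour flight
density_kg_per_l = 1.0     # water-like; real H2O2 monopropellant is ~1.4 kg/L,
                           # which only makes the problem worse

fuel_litres = burn_rate_l_per_s * flight_time_s
fuel_kg = fuel_litres * density_kg_per_l

print(f"{fuel_litres:.0f} L ≈ {fuel_kg / 1000:.1f} tonnes of fuel")
# -> 1620 L ≈ 1.6 tonnes: far more than a person can carry, and every extra
#    kilogram of fuel demands more thrust, which burns fuel even faster.
```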

These roadmaps can come in steps. It took quite a few steps to get from the Flier to something that made ocean liners obsolete, and each of those steps was useful. The PC also came in steps - from hobbyists to spreadsheets to web browsers. The same thing for mobile - we went from expensive analogue phones for a few people to cheap GSM phones for billions of people to smartphones that changed what mobile meant. But there was always a path. The Apple 1, Netscape and the iPhone all looked like impractical toys that ‘couldn’t be used for real work’, but there were obvious roadmaps to change that - not necessarily all the way to the future, but certainly to a useful next step.

Equally, sometimes the roadmap is ‘forget about this for 20 years’. The Newton or the IBM Simon were just too early, as was the first wave of VR in the 80s and 90s. You could have said, deterministically, that Moore’s Law would make VR or pocket computers useful at some point, so there was notionally a roadmap, but the roadmap told you to work on something else. This is different to the Rocket Belt, where there was no foreseeable future development that would make it work.

And sometimes the missing piece pops into existence in unpredictable ways - I have a fascinating essay by Hiram Maxim saying that he had everything about flight working in the late 19th century except for the engine - he couldn’t make a steam engine with the right power-to-weight ratio, and then the internal combustion engine changed all the equations. This isn’t a roadmap either - hoping that something new will come along, but you don’t know what or when, is not a plan.

Finally, sometimes you have a roadmap but discover that it runs out short of the destination. This might be what has happened to autonomous cars. The machine learning breakthrough of 2013 gave us a clear roadmap to go from AVs that didn’t work at all to AVs that work pretty well but not well enough. We have 10% left to go, but it now looks at least possible that the last 10% is 90% of the effort, and that might need something different. We might now be Hiram Maxim, waiting for an ICE.

Much the same sort of questions apply to the other side of the problem - even if this did get very cheap and very good, who would use it? You can’t do a waterfall chart of an engineering roadmap here, but you can again ask questions - what would have to change? Are you proposing a change in human nature, or a different way of expressing it? What’s your theory of why things will change or why they won’t?

Philip II of Macedon to Sparta: “You are advised to submit without further delay, for if I bring my army into your land, I will destroy your farms, slay your people, and raze your city.”

Sparta: “If”

The thread through all of this is that we don’t know what will happen, but we do know what could happen - we don’t know the answer, but we can at least ask useful questions. The key challenge to any assertion about what will happen, I think, is to ask ‘well, what would have to change?’ Could this happen, and if it did, would it work? We’re always going to be wrong sometimes, but we can try to be wrong for the right reasons. The point that Pauli was making in the quote I gave at the beginning is that a theory might be right or wrong, but first it has to rise to the level of being a theory at all. So, do you have a theory?


The VR winter

“Our vision is that VR / AR will be the next major computing platform after mobile in about 10 years. It can be even more ubiquitous than mobile - especially once we reach AR - since you can have it always on… Once you have a good VR / AR system, you no longer need to buy phones or TVs or many other physical objects - they can just become apps in a digital store.” - Mark Zuckerberg, 2015 (Source)


We tried VR in the 1980s, and it didn’t work. The idea may have been great, but the technology of the day was nowhere close to delivering it, and almost everyone forgot about it. Then, in 2012, we realised that this might work now. Moore’s law and the smartphone component supply chain meant that the hardware to deliver the vision was mostly there on the shelf. Since then we’ve gone from the proof of concept to maybe three quarters of the way towards a really great mass-market consumer device.

The problem is, we haven’t worked out what you would do with a great VR device that isn’t a game (or some very niche industrial application), and it’s not clear that we will. We’ve had five years of experimental projects and all sorts of content has been tried, and nothing other than games has really worked.

Meanwhile, it’s instructive that now that we’re all locked up at home, video calls have become a huge consumer phenomenon, but VR has not. This should have been a VR moment, and it isn’t.

Does that tell us anything? Surely if a raw experience is amazing, the applications will come with a bit more time? Well, perhaps. If you try the Oculus Quest, the experience is indeed amazing and it’s easy to think that this is part of the future. However, if you’d tried one of today’s games consoles in 1980 you’d have had the same reaction - clearly amazing and clearly part of the future. But it turned out that games consoles were a 150-200m unit installed base, not the 1.5bn of PCs, let alone the 4bn of smartphones. That’s a big business, but it’s a branch off the side of the tech industry, not its central, driving ecosystem. Most people’s experience of console games is the demo in the window of a Microsoft store in the mall - they say ‘that’s pretty’ and walk past. A long time ago a school teacher named Hammy Sparks (yes, really) blew my mind by suggesting that there can be different sized infinities - in tech, there can be different sized amazings.

Smartphones are broad and universal, whereas consoles are deep and narrow, and deep and narrow is a smaller market. VR is even deeper and even narrower, and so if we can’t work out a form of content that isn’t also deep and narrow, I think we have to assume that VR will be a subset of the games console. That would be a decent business, but it’s not why Mark Zuckerberg bought Oculus. It’s another branch off the side of tech, not the next platform after smartphones.

There’s a bunch of ideas that float around here. One is that you can’t really do apps and productivity yet because the screens aren’t high enough resolution to read text, so we can’t yet work in a 360 degree virtual sphere, and that will come. Another is that the headsets need to be even smaller and even lighter, and do pass-through so you can see the room around you. Yet another is that we just have to keep waiting, and in particular wait for a larger installed base (presumably driven by those deep-and-narrow games sales), and the innovation will somehow kick in.

There’s nothing fundamentally illogical about any of these ideas, but they do remind me a little of Karl Popper’s criticism of Marxists - that when asked why their supposedly scientific prediction hadn’t happened yet they would always say ‘ah, the historical circumstances aren’t right - you just have to wait a few more years’. There is also, of course, the tendency of Marxists to respond to being asked why communist states seem always to turn out badly by saying ‘ah, but that isn’t proper communism’. I seem to hear ‘ah, but that isn’t proper VR’ an awful lot these days.

To put this another way, it’s quite common to say that the iPhone, or PCs, or aircraft also looked primitive and useless once, but they got better, and the same will happen here. The problem with this is that the iPhone or the Wright Flier were indeed primitive and impractical, but they were breakthroughs of concept with clear paths for radical improvement. The iPhone had a bad camera, no apps and no 3G, but there was no reason why those couldn’t quickly be added. Blériot flew across the Channel just six years after the Wrights’ first powered flight. What’s the equivalent forward path here? There was an obvious roadmap for getting from a duct-taped mock-up to the Oculus Quest, and today for making the Quest even smaller and lighter, but what is the roadmap for breaking into a completely different model of consumer behaviour or consumer application? What specifically do you have to believe will change to take VR beyond games?

Poking away at this a bit further, I think there are maybe four propositions to think about.

  • Is it true that we are essentially almost there, and a bit more iteration of the hardware and the developer ecosystem will get us to a tipping point, and the S Curve will turn upwards? How would we know?

  • Are we where smartphones were before the iPhone? All of the core technology was there - we had apps and touch screens and fast data networks and so on - but we needed a change in the paradigm to package them all up in a much more accessible form. Is Oculus the new Symbian? It’s worth noting that no-one was really saying this about mobile before the iPhone - as I wrote here, the need for a new approach was only obvious in hindsight.

  • Is there a fundamental contradiction between a universally accessible experience and a device that you put on your head that shuts out the world around you and places you into an alternative reality? Is ‘VR that isn’t deep and narrow’ an oxymoron? That, after all, was the answer for games consoles. I suspect a lot of people in tech would reject this out of hand - the right VR, when we have it, must be the future, but one can’t actually take it as a given.

  • Or, by extension, is this the point - that ‘real’ VR needs some completely different device and that’s what would take it to universality? VR as HMD is narrow but VR as, say, neural lace is not?

Reading Mark’s quote above, as he talks about the merging of AR and VR, it strikes me that this and many visions for VR (cf ‘Ready Player One’) are really describing not ‘an HMD but a bit better’ but glasses, or perhaps contact lenses, or maybe even something even further into the future like neural implants. On that basis I think you could argue that even the Oculus Quest is not 3/4 of the way ‘there’ but actually still just at the beginning of the VR S Curve. The successor to the smartphone will be something that doesn’t just merge AR and VR but make the distinction irrelevant - something that you can wear all day every day, and that can seamlessly both occlude and supplement the real world and generate indistinguishable volumetric space. On that view the Oculus isn’t the iPhone - it’s the Newton, or the Apple 2, which were also far from universal, and the platonic ideal universal device is a decade or two into the future.

In turn, the trouble with this argument is that when tech people talk about ‘ten years’ or ‘twenty years’, they are effectively right on the edge of science fiction - my grandfather wrote a lot of science fiction, but I try to think about the stuff we have now, and the roadmaps we have now that might tell us what we can build next. But if ‘real VR’ needs something that’s ten or twenty years away, we’re in for another VR winter.

Pulling all of these threads together, the issue I circle around is not just that we don’t have a ‘killer app’ for VR beyond games, but that we don’t know what the path to getting one might be. We can make assertions of belief from first principles - there was no killer app for the first PCs either, but they looked useful. When I started my career 3G was the hot topic, and every investor kept asking ‘what’s the killer app for 3G?’ It turned out that the killer app for having the internet in your pocket was, well, having the internet in your pocket. But with each of those, we knew what to build next, and with VR we don’t. That tells me that VR has a place in the future. It just doesn’t tell me what kind of place.


Investing in Figma: The Decade of Design

I’ve always been interested in investing in companies and founders who take an old, staid technology category and turn it on its head, often reinventing an entirely new category (and inventing new behaviors) in the process. These technology tools bring …

The post Investing in Figma: The Decade of Design appeared first on Andreessen Horowitz.



