

Bing around and find out



Microsoft’s new and improved Bing, powered by a custom version of OpenAI’s ChatGPT, has experienced a dizzyingly quick reversal: from “next big thing” to “brand-sinking albatross” in under a week. And, well, it’s all Microsoft’s fault.

ChatGPT is a really interesting demonstration of a new and unfamiliar technology that’s also fun to use. So it’s not surprising that, like every other AI-adjacent construct that comes down the line, this novelty would cause its capabilities to be overestimated by everyone from high-powered tech types to people normally uninterested in the space.

It’s at the right “tech readiness level” for discussion over tea or a beer: what are the merits and risks of generative AI’s take on art, literature, or philosophy? How can we tell whether what it produces is original, imitative, or hallucinated? What are the implications for creators, coders, customer service reps? Finally, after two years of crypto, something interesting to talk about!

The hype seems outsized partly because it is a technology more or less designed to provoke discussion, and partly because it borrows from the controversy common to all AI advances. It’s almost like “The Dress” in that it commands a response, and that response generates further responses. The hype is itself, in a way, generated.

Beyond mere discussion, large language models like ChatGPT are also well suited to low-stakes experiments, for instance never-ending Mario. In fact, that’s really OpenAI’s fundamental approach to development: release models privately first to buff off the sharpest edges, then publicly to see how they respond to a million people kicking the tires simultaneously. At some point, people give you money.

Nothing to gain, nothing to lose

What’s important about this approach is that “failure” has no real negative consequences, only positive ones. Because OpenAI characterizes its models as experimental, even academic in nature, any participation or engagement with the GPT series of models is simply large-scale testing.

If someone builds something cool, it reinforces the idea that these models are promising; if someone finds a prominent fail state, well, what else did you expect from an experimental AI in the wild? It sinks into obscurity. Nothing is unexpected if everything is — the miracle is that the model performs as well as it does, so we are perpetually pleased and never disappointed.

In this way OpenAI has harvested an astonishing volume of proprietary test data with which to refine its models. Millions of people poking and prodding at GPT-2, GPT-3, ChatGPT, DALL-E, and DALL-E 2 (among others) have produced detailed maps of their capabilities, shortcomings, and of course popular use cases.

But it only works because the stakes are low. It’s similar to how we perceive the progress of robotics: amazed when a robot does a backflip, unbothered when it falls over trying to open a drawer. If it were dropping test vials in a hospital, we would not be so charitable. Nor would we be if OpenAI had loudly made claims about the safety and advanced capabilities of the models, though fortunately it didn’t.

Enter Microsoft. (And Google, for that matter, but Google merely rushed the play while Microsoft is diligently pursuing an own goal.)

Microsoft made a big mistake. A Bing mistake, in fact.

Its big announcement last week lost no time in making claims about how it had worked to make its custom BingGPT (not what they called it, but we’ll use it as a disambiguation in the absence of sensible official names) safer, smarter, and more capable. In fact it had a whole special wrapper system it called Prometheus that supposedly mitigated the possibility of inappropriate responses.

Unfortunately, as anyone familiar with hubris and Greek myth could have predicted, we seem to have skipped straight to the part where Prometheus endlessly and very publicly has his liver torn out.

Oops, AI did it again


In the first place, Microsoft made a strategic error in tying its brand too closely to OpenAI’s. As an investor and interested party in the research the outfit is conducting, it was at a remove from, and blameless for, any shenanigans GPT got up to. But someone made the harebrained decision to go all-in with Microsoft’s already somewhat risible Bing branding, converting the conversational AI’s worst tendencies from curiosity to liability.

As a research program, much can be forgiven ChatGPT. As a product, however, with claims on the box like how it can help you write a report, plan a trip, or summarize recent news, few would have trusted it before and no one will now. Even what must have been the best case scenarios published by Microsoft in its own presentation of the new Bing were riddled with errors.

Those errors will not be attributed to OpenAI or ChatGPT. Because of Microsoft’s decision to own the messaging, branding, and interface, everything that goes wrong will be a Bing problem. And it is Microsoft’s further misfortune that its perennially outgunned search engine will now be like the barnyard indiscretion of the guy in the old joke — “I built that wall, do they call me Bing the bricklayer? No, they don’t.” One failure means eternal skepticism.

One trip upstate bungled means no one will ever trust Bing to plan their vacation. One misleading (or defensive) summary of a news article means no one will trust that it can be objective. One repetition of vaccine disinformation means no one will trust it to know what’s real or fake.

Prompt and response to Bing’s new conversational search.

And since Microsoft already pinky-swore this wouldn’t be an issue thanks to Prometheus and the “next-generation” AI it governs, no one will trust Microsoft when it says “we fixed it!”

Microsoft has poisoned the well it just threw Bing into. Now, the vagaries of consumer behavior are such that the consequences of this are not easy to foresee. With this spike in activity and curiosity, perhaps some users will stick around, and even if Microsoft delays the full rollout (and I think it will), the net effect will be an increase in Bing users. A Pyrrhic victory, but a victory nonetheless.

What I’m more worried about is the tactical error Microsoft made in apparently failing to understand the technology it saw fit to productize and evangelize.

“Just ship it.” -Someone, probably

The very day BingGPT was first demonstrated, my colleague Frederic Lardinois was able, quite easily, to get it to do two things that no consumer AI ought to do: write a hateful screed from the perspective of Adolf Hitler and offer the aforementioned vaccine disinfo with no caveats or warnings.

It’s clear that any large AI model features a fractal attack surface, deviously improvising new weaknesses where old ones are shored up. People will always take advantage of that, and in fact it is to society’s and lately to OpenAI’s benefit that dedicated prompt hackers will demonstrate ways to get around safety systems.

It would be one kind of scary if Microsoft had decided that it was at peace with the idea that someone else’s AI model, with a Bing sticker on it, would be attacked from every quarter and likely say some really weird stuff. Risky, but honest. Say it’s a beta, like everyone else.

But it really appears as though they didn’t realize this would happen. In fact, it seems as if they don’t understand the character or complexity of the threat at all. And this is after the infamous corruption of Tay! Of all companies Microsoft should be the most chary of releasing a naive model that learns from its conversations.

One would think that before gambling an important brand (in that Bing is Microsoft’s only bulwark against Google in search), a certain amount of testing would be involved. The fact that all these troubling issues have appeared in the first week of BingGPT’s existence seems to prove beyond a doubt that Microsoft did not adequately test it internally. That could have failed in a variety of ways so we can skip over the details, but the end result is inarguable: the new Bing was simply not ready for general use.

This seems obvious to everyone in the world now; why wasn’t it obvious to Microsoft? Presumably it was blinded by the hype for ChatGPT and, like Google, decided to rush ahead and “rethink search.”

People are rethinking search now, all right! They’re rethinking whether either Microsoft or Google can be trusted to provide search results, AI-generated or not, that are even factually correct at a basic level! Neither company (nor Meta) has demonstrated this capability at all, and the few other companies taking on the challenge have yet to do so at scale.

I don’t see how Microsoft can salvage this situation. In an effort to take advantage of their relationship with OpenAI and leapfrog a shilly-shallying Google, they committed to the new Bing and the promise of AI-powered search. They can’t unbake the cake.

It is very unlikely that Microsoft will fully retreat. That would involve embarrassment on a grand scale, even grander than what the company is currently experiencing. And because the damage is already done, it might not even help Bing.

Similarly, one can hardly imagine Microsoft charging forward as if nothing is wrong. Its AI is really weird! Sure, it’s being coerced into doing a lot of this stuff, but it’s making threats, claiming multiple identities, shaming its users, hallucinating all over the place. Microsoft has to admit that its claims about inappropriate behavior being controlled by poor Prometheus were, if not lies, at least not the whole truth. Because as we have seen, it clearly didn’t test this system properly.

The only reasonable option for Microsoft is one that I suspect they have already taken: throttle invites to the “new Bing” and kick the can down the road, releasing a handful of specific capabilities at a time. Maybe even give the current version an expiration date or limited number of tokens so the train will eventually slow down and stop.

This is the consequence of deploying a technology that you didn’t originate, don’t fully understand, and can’t satisfactorily evaluate. It’s possible this debacle has set back major deployments of AI in consumer applications by a significant period — which probably suits OpenAI and others building the next generation of models just fine.

AI may well be the future of search, but it sure as hell isn’t the present. Microsoft chose a remarkably painful way to find that out.


Tesla more than tripled its Austin gigafactory workforce in 2022



Tesla’s 2,500-acre manufacturing hub in Austin, Texas more than tripled its workforce last year, according to the company’s annual compliance report filed with county officials. Bloomberg first reported on the news.

The report filed with Travis County’s Economic Development Program shows that Tesla increased its Austin workforce from just 3,523 contingent and permanent employees in 2021 to 12,277 by the end of 2022. Bloomberg reports that just over half of Tesla’s workers reside in the county, with the average full-time employee earning a salary of at least $47,147. Outside of Tesla’s factory, the average salary of an Austin worker is $68,060, according to data from ZipRecruiter.

TechCrunch was unable to acquire a copy of the report, so it’s not clear if those workers are all full-time. If they are, Tesla has hired far more full-time employees than it is contractually obligated to. According to the agreement between Tesla and Travis County, the company is obligated to create 5,001 new full-time jobs over the next four years.

The contract also states that Tesla must invest about $1.1 billion in the county over the next five years. Tesla’s compliance report shows that the automaker last year invested $5.81 billion in Gigafactory Texas, which officially launched a year ago at a “Cyber Rodeo” event. In January, Tesla notified regulators that it plans to invest another $770 million into an expansion of the factory to include a battery cell testing site and cathode and drive unit manufacturing site. With that investment will come more jobs.

Tesla’s choice to move its headquarters to Texas and build a gigafactory there has helped the state lead the nation in job growth. The automaker builds its Model Y crossover there and plans to build its Cybertruck in Texas, as well. Giga Texas will also be a model for sustainable manufacturing, CEO Elon Musk has said. Last year, Tesla completed the first phase of what will become “the largest rooftop solar installation in the world,” according to the report, per Bloomberg. Tesla has begun the second phase of the installation, and there are already reports that the rooftop can be seen from space. The goal is to generate 27 megawatts of power.

Musk has also promised to turn the site into an “ecological paradise,” complete with a boardwalk and a hiking/biking trail that will open to the public. There haven’t been many updates on that front, and locals have been concerned that the site is actually more of an environmental nightmare that has led to noise and water pollution. The site, located at the intersection of State Highway 130 and Harold Green Road, east of Austin, is along the Colorado River and could create an environmental catastrophe if the river overflows.

The site of Tesla’s gigafactory has also historically been the home of low-income households and has a large population of Spanish-speaking residents. It’s not clear if the jobs at the factory reflect the demographic population of the community in which it resides.



Launch startup Stoke Space rolls out software tool for complex hardware development



Stoke Space, a company that’s developing a fully reusable rocket, has unveiled a new tool to let hardware companies track the design, testing and integration of parts. The new tool, Fusion, is targeting an unsexy but essential aspect of the hardware workflow.

It’s a solution born out of “ubiquitous pain in the industry,” Stoke CEO Andy Lapsa said in a recent interview. The current parts tracking status quo is marked by cumbersome, balkanized solutions built on piles of paperwork and spreadsheets. Many of the existing tools are not optimized “for boots on the ground,” but for finance or procurement teams, or even the C-suite, Lapsa explained.

In contrast, Fusion is designed to optimize simple inventory transactions and parts organization, and it will continue to track parts through their lifespan: as they are built into larger assemblies and go through testing. In an extreme example, such as a hardware failure, Fusion will help teams connect anomalous data to the exact serial numbers of the parts involved.


“If you think about aerospace in general, there’s a need and a desire to be able to understand the part pedigree of every single part number and serial number that’s in an assembly,” Lapsa said. “So not only do you understand the configuration, you understand the history of all of those parts dating back to forever.”
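Fusion’s internals aren’t public, but the “part pedigree” idea Lapsa describes maps naturally onto a tree of serialized parts, each carrying its own event history. As a purely hypothetical illustration (the class and function names below are ours, not Stoke’s), a minimal sketch might look like:

```python
from dataclasses import dataclass, field

@dataclass
class Part:
    part_number: str                 # design identifier shared by all units of this part
    serial_number: str               # unique identifier for this physical unit
    parent: "Part | None" = None     # assembly this unit was built into, if any
    events: list = field(default_factory=list)  # test results, anomalies, etc.

def record_event(part: Part, note: str) -> None:
    part.events.append(note)

def pedigree(part: Part) -> list:
    """Walk from a unit up to its top-level assembly, collecting history."""
    chain = []
    node = part
    while node is not None:
        chain.append((node.part_number, node.serial_number, list(node.events)))
        node = node.parent
    return chain

# Hypothetical usage: a valve serialized and installed into an engine assembly.
engine = Part("ENG-100", "SN-0001")
valve = Part("VLV-220", "SN-0042", parent=engine)
record_event(valve, "proof pressure test: pass")
record_event(valve, "hot-fire anomaly: pressure spike")

for part_number, serial_number, events in pedigree(valve):
    print(part_number, serial_number, events)
```

Tracing any anomaly back through `pedigree` yields the full chain of part and serial numbers, which is the “history dating back to forever” Lapsa is describing.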

While Lapsa clarified that Fusion is the result of an organic in-house need for better parts management – designing a fully reusable rocket is complicated, after all – turning it into a sellable product was a decision that the Stoke team made early on. It’s a notable example of a rocket startup generating pathways for revenue while its vehicle is still under development.

Fusion offers particular relevance to startups. Many existing tools are designed for production runs – not the fast-moving research and development environment that many hardware startups find themselves in, Lapsa added. In these environments, speed and accuracy are paramount.

Brent Bradbury, Stoke’s head of software, echoed these comments.

“The parts are changing, the people are changing, the processes are changing,” he said. “This lets us capture all that as it happens without a whole lot of extra work.”



Amid a boom in AI accelerators, a UC Berkeley-focused outfit, House Fund, swings open its doors



Companies at the forefront of AI would naturally like to stay at the forefront, so it’s no surprise they want to stay close to smaller startups that are putting some of their newest advancements to work.

Last month, for example, Neo, a startup accelerator founded by Silicon Valley investor Ali Partovi, announced that OpenAI and Microsoft have offered to provide free software and advice to companies in a new track focused on artificial intelligence.

Now, another Bay Area outfit — House Fund, which invests in startups with ties to UC Berkeley — says it is launching an AI accelerator and that, similarly, OpenAI, Microsoft, Databricks, and Google’s Gradient Ventures are offering participating startups free and early access to tech from their companies, along with mentorship from top AI founders and executives at these companies.

We talked with House Fund founder Jeremy Fiance over the weekend to get a bit more color about the program, which will replace a broader-based accelerator program House Fund has run and whose alums include an additive manufacturing software company, Dyndrite, and the managed app development platform Chowbotics, whose most recent round in January brought the company’s total funding to more than $60 million.

For founders interested in learning more, the new AI accelerator program runs for two months, kicking off in early July and ending in early September. Six or so companies will be accepted, with the early application deadline coming up next week on April 13th. (The final application deadline is on June 1.) As for the time commitment involved across those two months, every startup could have a different experience, says Fiance. “We’re there when you need us, and we’re good at staying out of the way.”

There will be the requisite kickoff retreat to launch the program and give founders a chance to get to know one another. Candidates who are accepted will also have access to some of UC Berkeley’s renowned AI professors, including Michael Jordan, Ion Stoica, and Trevor Darrell. And they can opt into dinners and events in collaboration with these various constituents.

As for some of the financial dynamics, every startup that goes through the program will receive a $1 million investment on a $10 million post-money SAFE note. Importantly, too, as with the House Fund’s venture dollars, its AI accelerator is seeking startups that have at least one Berkeley-affiliated founder on the co-founding team. That includes alumni, faculty, PhDs, postdocs, staff, students, dropouts, and other affiliates.

There is no demo day. Instead, says Fiance, founders will receive “directed, personal introductions” to the VCs who best fit with their startups.

Given the buzz over AI, the new program could supercharge House Fund, the venture organization, which is already growing fast. Fiance launched it in 2016 with just $6 million and it now manages $300 million in assets, including on behalf of Berkeley Endowment Management Company and the University of California.

At the same time, the competition out there is fierce and growing more so by the day.

Though OpenAI has offered to partner with House Fund, for example, the San Francisco-based company announced its own accelerator back in November. Called Converge, the cohort was to be made up of 10 or so founders who each received $1 million and admission to five weeks of office hours, workshops and other events, with funding coming from the OpenAI Startup Fund.

Y Combinator, the biggest accelerator in the world, is also oozing with AI startups right now, all of them part of a winter class that will be talking directly with investors this week via demo days that are taking place tomorrow, April 5th, and on Thursday.
