

TikTok updates its policies with a focus on minor and LGBTQ safety, age-appropriate content and more



Months after TikTok was hauled into its first-ever major congressional hearing over platform safety, the company is today announcing a series of policy updates and plans for new features and technologies aimed at making the video-based social network a safer and more secure environment, particularly for younger users. The changes attempt to address some concerns raised by U.S. senators during their inquiries into TikTok’s business practices, including the prevalence of eating disorder content and dangerous hoaxes on the app, which are particularly harmful to teens and young adults. In addition, TikTok is laying out a roadmap for addressing other serious issues around hateful ideologies with regard to LGBTQ and minor safety — the latter of which will involve having creators designate the age-appropriateness of their content.

TikTok also said it’s expanding its policy to protect the “security, integrity, availability, and reliability” of its platform. This change follows recent news that the Biden administration is weighing new rules for Chinese apps to protect U.S. user data from being exploited by foreign adversaries. The company said it will open cyber incident monitoring and investigative response centers in Washington, D.C., Dublin and Singapore this year, as part of this expanded effort to better prohibit unauthorized access to TikTok content, accounts, systems and data.

Another one of the bigger changes ahead for TikTok is a new approach to age-appropriate design — a topic already front of mind for regulators.

In the U.K., digital services aimed at children now have to abide by legislative standards that address children’s privacy, tracking, parental controls, the use of manipulative “dark patterns” and more. In the U.S., meanwhile, legislators are working to update the existing children’s privacy law (COPPA) to add more protection for teens. TikTok already has different product experiences for users of different ages, but it now wants to also identify which content is appropriate for younger and older teens versus adults.

Image Credits: TikTok’s age-appropriate design

TikTok says it’s developing a system to identify and restrict certain types of content from being accessed by teens. Though the company isn’t yet sharing specific details about the new system, it will involve three parts. First, community members will be able to choose which “comfort zones” — or levels of content maturity — they want to see in the app. Parents and guardians will also be able to use TikTok’s existing Family Pairing parental control feature to make decisions on this on behalf of their minor children. Finally, TikTok will ask creators to specify when their content is more appropriate for an adult audience.

Image Credits: TikTok’s Family Pairing feature

“We’ve heard directly from our creators that they sometimes have a desire to only reach a specific older audience. So, as an example, maybe they’re creating a comedy that has adult humor, or offering kind of boring workplace tips that are relevant only to adults. Or maybe they’re talking about very difficult life experiences,” explained Tracy Elizabeth, TikTok’s U.S. head of Issue Policy, who oversees minor safety for the platform, in a briefing with reporters. “So given those varieties of topics, we’re testing ways to help better empower creators to reach the intended audience for their specific content,” she noted.

Elizabeth joined TikTok in early 2020 to focus on minor safety and was promoted to her new position in November 2021, which now sees her overseeing the Trust & Safety Issue Policy teams, including the Minor Safety, Integrity & Authenticity, Harassment & Bullying, Content Classification and Applied Research teams. Before TikTok, she spent more than three and a half years at Netflix, where she helped the company establish its global maturity ratings department. That work will inform her efforts at TikTok.

But, Elizabeth notes, TikTok won’t go as far as having “displayable” ratings or labels on TikTok videos, which would allow people to see the age-appropriate nature of a given piece of content at a glance. Instead, TikTok will rely on categorization on the back end, which will lean on having creators tag their own content in some way. (YouTube takes a similar approach, as it asks creators to designate whether their content is either adult or “made for kids,” for example.)

TikTok says it’s running a small test in this area now, but has nothing yet to share publicly.

“We’re not in the place yet where we’re going to introduce the product with all the bells and whistles. But we will experiment with a very small subset of user experiences to see how this is working in practice, and then we will make adjustments,” Elizabeth noted.

Image Credits: TikTok

TikTok’s updated content policies

In addition to its plans for a content maturity system, TikTok also announced today it’s revising its content policies in three key areas: hateful ideologies, dangerous acts and challenges, and eating disorder content.

While the company had policies addressing each of these subjects already, it’s now clarifying and refining these policies and, in some cases, moving them to their own category within its Community Guidelines in order to provide more detail and specifics as to how they’ll be enforced.

In terms of hateful ideologies, TikTok is adding clarity around prohibited topics. The policy will now specify that practices like deadnaming and misgendering, misogyny, and content supporting or promoting conversion therapy programs will not be permitted. The company says these subjects were already prohibited, but it heard from creators and civil society organizations that its written policies should be more explicit. GLAAD, which worked with TikTok on the policy, shared a statement from its CEO, Sarah Kate Ellis, in support of the changes, saying the update “raises the standard for LGBTQ safety online” and “sends a message that other platforms which claim to prioritize LGBTQ safety should follow suit with substantive actions like these.”

Another policy being expanded focuses on dangerous acts and challenges. This is an area the company recently addressed with an update to its Safety Center and other resources in the wake of upsetting, dangerous and even fatal viral trends, including “slap a teacher,” the blackout challenge and another that encouraged students to destroy school property. TikTok denied hosting some of this content on its platform, saying for example, that it found no evidence of any asphyxiation challenges on its app, and claiming “slap a teacher” was not a TikTok trend. However, TikTok still took action to add more information about challenges and hoaxes to its Safety Center and added new warnings when such content was searched for on the app, as advised by safety experts and researchers.

Today, TikTok says dangerous acts and challenges will also be broken out into their own policy, and it will launch a series of creator videos as part of a broader PSA-style campaign aimed at helping TikTok’s younger users better assess online content. These videos will relay the message that users should “Stop, Think, Decide, and Act” when they come across online challenges — meaning, take a moment to pause, consider whether the challenge is real (or check with an adult, if unsure), decide if it’s risky or harmful, then act by reporting the challenge in the app and by choosing not to share it.

Image Credits: TikTok

On the topic of eating disorder content — a major focus of the congressional hearing not only for TikTok, but also for other social networks like Instagram, YouTube and Snapchat — TikTok is taking more concrete steps. The company says it already removes “eating disorder” content, like content that glorifies bulimia or anorexia, but it will now broaden its policy to restrict the promotion of “disordered eating” content. This term aims to encompass other early-stage signs that can later lead to an eating disorder diagnosis, like extreme calorie counting, short-term fasting and even over-exercise. However, this is a more difficult area for TikTok to tackle because of the nuance involved in making these calls.

The company acknowledges that some of these videos may be fine on their own, but it needs to examine what sort of “circuit breakers” can be put into place when it sees people becoming trapped in filter bubbles where they’re consuming too much of this sort of content. This builds on news from December, when TikTok shared how its product team and trust and safety team had begun collaborating on features to help “pop” users’ filter bubbles and lead them, by way of recommendations, into other areas for a more diversified experience.

While this trio of policy updates sounds good on paper, enforcement here is critical — and difficult. TikTok has had guidelines against some of this content, but misogyny and transphobic content have repeatedly slipped through the cracks. At times, violative content was even promoted by TikTok’s algorithms, according to some tests. This sort of moderation failure is an area TikTok says it aims to learn from and improve on.

“At TikTok, we firmly believe that feeling safe is what enables everybody’s creativity to truly thrive and shine. But well-written, nuanced and user-first policies aren’t the finish line. Rather, the strength of any policy lies in enforceability,” said TikTok’s policy director for the U.S. Trust & Safety team, Tara Wadhwa, about the updates. “We apply our policies across all the features that TikTok offers, and in doing so, we absolutely strive to be consistent and equitable in our enforcement,” she said.

At present, content goes through technology that’s been trained to identify potential policy violations, which results in immediate removal if the technology is confident the content is violative. Otherwise, it’s held for human moderation. But this lag time impacts creators, who don’t understand why their content is held for hours (or days!) as decisions are made, or why non-violative content is removed, forcing them to submit appeals. These mistakes — which are often attributed to algorithmic or human errors — can make creators feel personally targeted by TikTok.

To address moderation problems, TikTok says it’s invested in specialized moderator training in areas like body positivity, inclusivity, civil rights, counter speech and more. The company claims around 1% of all videos uploaded in the third quarter of last year — or 91 million videos — were removed through moderation policies, many before they ever received views. The company today also employs “thousands” of moderators, both as full-time U.S. employees and as contract moderators in Southeast Asia, to provide 24/7 coverage. And the company says it runs internal post-mortems when it makes mistakes.

However, problems with moderation and policy enforcement become more difficult at scale, as there is simply more content to manage. And TikTok has now grown big enough to be cutting into Facebook’s growth as one of the world’s largest apps. In fact, Meta just reported that Facebook saw its first-ever decline in users in the fourth quarter, which it blamed, in part, on TikTok. As more young people turn to TikTok as their preferred social network, the company will be pressed not just to say the right things, but to actually get them right.


A network of knockoff apparel stores exposed 330,000 customer credit cards



If you recently made a purchase from an overseas online store selling knockoff clothes and goods, there’s a chance your credit card number and personal information were exposed.

Since January 6, a database containing hundreds of thousands of unencrypted credit card numbers and corresponding cardholders’ information was spilling onto the open web. By the time it was pulled offline on Tuesday, the database held about 330,000 credit card numbers, cardholder names and full billing addresses, and the total was rising in real time as customers placed new orders. The data contained everything a criminal would need to make fraudulent transactions and purchases using a cardholder’s information.

The credit card numbers belong to customers who made purchases through a network of near-identical online stores claiming to sell designer goods and apparel. But the stores had the same security problem in common: any time a customer made a purchase, their credit card data and billing information was saved in a database, which was left exposed to the internet without a password. Anyone who knew the IP address of the database could access reams of unencrypted financial data.

Anurag Sen, a good-faith security researcher, found the exposed credit card records and asked TechCrunch for help in reporting it to its owner. Sen has a respectable track record of scanning the internet looking for exposed servers and inadvertently published data, and reporting it to companies to get their systems secured.

But in this case, Sen wasn’t the first person to discover the spilling data. According to a ransom note left behind on the exposed database, someone else had found the exposed data and, instead of trying to identify the owner and responsibly report the spill, claimed to have taken a copy of the entire database’s contents of credit card data and offered to return it in exchange for a small sum of cryptocurrency.

A review of the data by TechCrunch shows most of the credit card numbers are owned by cardholders in the United States. Several people we contacted confirmed that their exposed credit card data was accurate.

TechCrunch has identified several online stores whose customers’ information was exposed by the leaky database. Many of the stores claim to operate out of Hong Kong. Some of the stores are designed to sound similar to big-name brands, like Sprayground, but their websites have no discernible contact information, are riddled with typos and spelling mistakes, and show a conspicuous lack of customer reviews. Internet records also show the websites were set up in the past few weeks.

Some of these websites include:


If you bought something from one of those sites in the past few weeks, you might want to consider your banking card compromised and contact your bank or card provider.

It’s not clear who is responsible for this network of knockoff stores. TechCrunch contacted a person via WhatsApp whose Singapore-registered phone number was listed as the point of contact on several of the online stores. It’s not clear if the contact number listed is even involved with the stores, given one of the websites listed its location as a Chick-fil-A restaurant in Houston, Texas.

Internet records showed that the database was operated by a customer of Tencent, whose cloud services were used to host the database. TechCrunch contacted Tencent about its customer’s database leaking credit card information, and the company responded quickly. The customer’s database went offline a short time later.

“When we learned of the incident, we immediately contacted the customer who operates the database and it was shut down immediately. Data privacy and security are top priorities at Tencent. We will continue to work with our customers to ensure they maintain their databases in a safe and secure manner,” said Carrie Fan, global communications director at Tencent.




All Raise CEO steps down again



Less than a year after assuming the role, All Raise CEO Mandela SH Dixon has stepped down from her position at the nonprofit. The entrepreneur, who previously ran Founder Gym, an online training center for underrepresented founders, said in a blog post that the decision was made after she realized “being in the field working directly with entrepreneurs everyday” is her passion. Dixon said that she will be exploring new opportunities in alignment with that.

Her resignation is effective February 1, 2023. She will remain an advisor to the Bay Area-based nonprofit.

Dixon is the second chief executive to leave All Raise since it was founded in 2017. In 2021, Pam Kostka stepped down from the helm of the nonprofit to rejoin the startup world as well; Kostka is now an operator in residence and limited partner at Operator Collective, according to her LinkedIn. With Dixon gone, Paige Hendrix Buckner, who joined the outfit as chief of staff nine months ago, will step in as interim CEO. In the same blog post, Buckner wrote that “Mandela leaves All Raise in a strong position, and I’m grateful for the opportunity to continue the hard work of diversifying the VC backed ecosystem.”

Dixon did not immediately respond to a request for comment on the record. It is unclear whether All Raise is immediately kicking off a search for a permanent CEO.

The nonprofit has historically defined its goals in two ways: first, it wants to increase the amount of seed funding that goes to female founders from 11% to 23% by 2030, and, second, it wants to double the percentage of female decision-makers at U.S. firms by 2028.

In previous interviews, Dixon said that the nonprofit will work on creating explicit goals around what impact it wants to have for historically overlooked individuals. The data underscores the challenge ahead. Black and Latinx women receive disproportionately less venture capital money than white women; non-binary founders can also face higher hurdles when seeking funding, as All Raise board member Aileen Lee noted in the blog post. The nonprofit has created specific programs for Black and Latinx founders but has not yet disclosed a specific goal for the cohort. Such gaps can go unnoticed if they aren’t tracked. All Raise’s last impact report was published in 2020, and the organization is working on bringing that analysis back, Lee tells TechCrunch in an interview.

“All Raise is in great hands with Paige as interim leader and we’ve got a lot of exciting things that we’re shaping and scaling,” Lee said. “We have to all continue to link arms to try and continue to make improvements for our industry…we’ve made good progress that we can’t let up.”

Since launch, the nonprofit has raised $11 million in funding, and opened regional chapters in New York, Boston, Los Angeles, Chicago, DC and, soon, Miami.



Shopping app Temu is using TikTok’s strategy to keep its No. 1 spot on App Store



Temu, a shopping app from Chinese e-commerce giant Pinduoduo, is having quite the run as the No. 1 app on the U.S. app stores. The mobile shopping app hit the top spot on the U.S. App Store in September and has continued to hold a highly ranked position in the months that followed, including as the No. 1 free app on Google Play since December 29, 2022. More recently, Temu snagged the No. 1 position on the iOS App Store again on January 3 and hasn’t dropped since — even outpacing competitor Shein’s daily installs in the U.S.

Offering cheap factory-to-consumer goods, Temu provides access to a wide range of products, including fast fashion, and pushes users to share the app with friends in exchange for free products, which may account for some of its growth. However, the large majority of its new installs appear to come from Temu’s marketing spend.

When TechCrunch covered Temu’s rise in November, the app had seen a little more than 5 million installs in the U.S., according to data from app intelligence firm Sensor Tower, making the U.S. its largest market. Now, the firm says the app has seen 5 million U.S. installs in January alone, up 19% from the 4.2 million installs it recorded over the prior 22 days, December 10 through December 31.

According to Sensor Tower estimates, Temu has managed to achieve a total of 19 million lifetime installs across the U.S. App Store and Google Play, more than 18 million of which came from the U.S.

The growth now sees Temu outpacing rival Shein in terms of daily installs. In October, Temu was averaging around 43,000 daily installs in the U.S., the firm said, while Shein averaged about 62,000. In November, Temu’s average daily installs grew to 185,000 while Shein’s climbed to 70,000 and last month, Temu averaged 187,000 installs while Shein saw about 62,000.

The shopping app’s fast rise recalls how the video entertainment platform TikTok grew to become the most downloaded app worldwide in 2021, after years of outsized growth. The video app topped 2 billion lifetime downloads by 2020, including sister app Douyin in China, Sensor Tower said. Combined, the TikTok apps have now reached 4.1 billion installs.

Like Temu, much of TikTok’s early growth was driven by marketing spend. The video app grew its footprint in the U.S. and abroad by heavily leveraging Facebook, Instagram, and Snapchat’s own ad platforms to acquire its customers. TikTok was famously said to have spent $1 billion on ads in 2018, even becoming Snap’s biggest advertiser that year, for instance.

By investing in user acquisition upfront, TikTok was able to gain a following which then improved its ability to personalize its For You feed with recommendations. Over time, this algorithm became very good at recognizing what videos would attract the most interest thanks to this investment, turning TikTok into one of the most addictive apps in terms of time spent. As of 2020, kids and teens began spending more time watching TikTok than they did on YouTube. And earlier this month, Insider Intelligence data indicated all TikTok users in the U.S. were now spending an average of nearly 1 hour per day on the app (55.8 minutes), compared with just 47.5 minutes on YouTube, including YouTube TV.

While Temu is nowhere near TikTok’s sky-high figures, it appears to be leveraging a similar growth strategy. The company is heavily investing in advertising to acquire users, which it uses to personalize the shopping experience. One of Temu’s key features, in fact, is its own sort of For You page that encourages users to browse trending items “Selected for You.” In addition to gamification elements, Temu also puts heavy emphasis on recommending shops and products on its home page, which is informed by its user data.

But the app’s growth doesn’t seem to be driven by social media. While the Temu hashtag (#temu) on TikTok is nearing 250 million views, that’s not really a remarkable number for an app as big as TikTok where something like #dogs has 120.5 billion views. (Or, for a more direct comparison, #shein has 48.3 billion views.) That suggests Temu’s rise isn’t necessarily powered by viral videos among Gen Z users or influencer marketing, but rather more traditional digital advertising.

According to Meta’s ad library, for instance, Temu has run some 8,800 ads across Meta’s various platforms just this month. The ads promote Temu’s sales and its extremely discounted items, like $5 necklaces, $4 shirts, and $13 shoes, among other deals. These ads appear to be working to boost Temu’s installs, allowing the app to maintain its No. 1 slot on the App Store’s “Top Free” charts, which are heavily influenced by the number of downloads and download velocity, among other things.

Of course, a high number of downloads doesn’t necessarily mean Temu’s app will maintain a high number of monthly active users. Nor does it mean those users won’t churn out of the app after their initial curiosity has abated. Still, Temu’s download growth saw it rank as the No. 1 “Breakout” shopping app by downloads in the U.S. for 2022, according to a year-end “State of Mobile” report. (The report calculates “Breakout” apps in terms of year-over-year growth across iOS and Google Play.)

Because Temu’s growth is more recent, the app did not earn a spot among the Top 10 apps of 2022, either in the U.S. or globally, in terms of downloads, consumer spend or monthly active users in that report. Instead, most of those spots still went to social media apps, streamers, and dating apps like Bumble and Tinder. The only retailer to find a spot on these lists was Amazon, which was the No. 7 app worldwide by active users and the No. 8 most downloaded in the U.S.

Temu’s marketing investment may not pay off as well as TikTok’s did, though, as other discount shopping apps saw similar growth only to later fail as consumers found that, actually, $2 shirts and jeans were deals that were too good to be true. Wish famously fumbled as consumers grew frustrated with long delivery times, fake listings, missing orders, poor customer service, and other things consumers expect from online retail in the age of Amazon.

Temu today holds a 4.7-star rating on the U.S. App Store, but such ratings have become less trustworthy over the years due to the ease with which companies can get away with fake reviews. Dig into the reviews and you’ll find complaints similar to those about Wish, including scammy listings, damaged and delayed deliveries, incorrect orders and a lack of customer service. Without addressing these issues, Temu seems more likely to go the way of Wish, not TikTok, no matter what it spends.
