7-minute read | 1,500 words
What to know this week
India proposes forcing smartphone makers to hand over source code.
India’s new security standards proposal seeks to greatly expand the nation’s oversight over mobile phones.
OpenAI reaches a deal with a kids’ safety group.
OpenAI has partnered with a leading kids’ online safety group on a key ballot initiative.
This week's full stories
India looks to force smartphone makers to submit source code.
THE NEWS
On Monday, India proposed a new rule that, if implemented, would require smartphone makers to share their source code with the government and would mandate several software changes as new security measures. These changes come as part of a package of eighty-three new security standards.
Some of these new standards include:
- Preventing apps from accessing cameras, microphones, and location services when phones are inactive.
- Requiring devices to periodically display warnings prompting users to review app permissions.
- Mandating that devices store security audit logs for twelve months.
- Requiring phones to periodically scan for malware and harmful applications.
- Ensuring that pre-installed applications are deletable, except those needed for basic functions.
- Requiring device makers to notify the government of security updates before releasing them to the general public.
This effort comes as Prime Minister Narendra Modi looks to increase the security of user data, as online fraud and data breaches have continued to rise in India.
IT Secretary S. Krishnan commented on these efforts, stating:
“Any legitimate concerns of the industry will be addressed with an open mind.”
THE KNOWLEDGE
These latest security standards and requirements come at a time when tensions between the Indian government and phone makers are already high. At the end of 2025, India unsuccessfully attempted to require smartphone makers to pre-install a state-owned cybersecurity application.
At the time, India argued that this application, known as Sanchar Saathi, would enable the government to help prevent cyber threats and assist in efforts to track and block lost or stolen phones. Additionally, manufacturers would have been tasked with ensuring that users could not disable or restrict the application’s features. The effort received substantial pushback, with the main opposition party demanding that the order be rolled back, and the government ultimately reversed its decision.
Soon after, India announced that it was reviewing another proposal that would expand phone-location surveillance capabilities. More specifically, the proposal would require smartphone firms to keep satellite location tracking permanently enabled on mobile devices. Like the previous measure, this effort drew criticism from privacy advocates and phone makers alike, who argued that it would amount to significant regulatory overreach.
Together, these efforts demonstrate a clear push by the Modi administration to expand its surveillance and security oversight of phones, which privacy advocates and industry groups argue would be unprecedented among major democracies.
THE IMPACT
India’s proposed standards mark another step in the Modi administration’s broader push to exert greater control over smartphones and their underlying software. While officials argue these measures are necessary to combat fraud, theft, and data breaches, privacy advocates and device makers warn that requirements such as source-code disclosure and advance notice of security updates would be unprecedented among major democracies and could undermine user privacy, platform security, and intellectual property protections.
If implemented, the rules could significantly alter how smartphone companies operate in India. Compliance costs may rise, product update timelines could slow, and some manufacturers may reconsider how they deploy features in the country. More broadly, the proposal underscores a growing tension between the Modi administration’s national security objectives and digital privacy.
As regulators consider feedback, this proposal will likely shape future debates over surveillance, data protection, and platform governance in India. Companies operating in the market will need to closely track how these standards evolve to avoid regulatory exposure and prepare for potentially far-reaching operational changes.
OpenAI partners with kids’ safety group on ballot measure.
THE NEWS
On Friday, OpenAI announced that it has reached a deal with the children’s online safety group Common Sense Media regarding a California ballot initiative. This ballot initiative centers on allowing California voters to decide on new rules governing how minors use and engage with artificial intelligence (AI) chatbots.
Under this proposal, California would create new requirements for AI companies to verify a user’s age and implement safeguards for young people using their products, alongside limits on the sale of young people’s data.
In a joint press conference with OpenAI and Common Sense Media, Common Sense Media CEO Jim Steyer stated:
“Rather than confusing the voters with competing ballot initiatives on AI, we decided to work together and to enact the strongest protections in the country for kids, teens, and families.”
The measure still needs to collect enough signatures to qualify for the November ballot later this year.
THE KNOWLEDGE
This ballot initiative comes shortly after the state enacted a measure last year to increase regulatory oversight of AI chatbots. SB 243, also known as the Companion Chatbot law, was passed in October 2025 and took effect at the beginning of 2026. Through this law, California introduced new operational requirements mandating that covered entities:
- Disclose to users that they are interacting with an AI system.
- Implement protocols that prevent chatbots from creating harmful content.
- Include a disclosure that the chatbot may not be suitable for minors.
- Enforce extra safeguards protecting minors from sexually explicit conduct and comply with notification requirements.
This law is part of a broader effort within California to improve regulatory oversight of AI developers and deployers. Alongside these efforts is SB 53, or the Transparency in Frontier AI Act, through which California targeted “frontier” AI systems by increasing transparency reporting requirements, establishing frontier AI frameworks, creating stronger whistleblower protections, and mandating incident reporting.
Given California’s continued effort to regulate AI developers and deployers, it is likely that this latest initiative and other similar ones will continue to gain traction throughout 2026.
THE IMPACT
Even if this proposal is folded into another legislative package before reaching the November ballot, the initiative signals a meaningful escalation in California’s approach to regulating AI chatbots and how minors engage with them. If successful, the measure could establish one of the most comprehensive state frameworks for youth safeguards and data protection.
In the short term, California residents, AI companies, and advocacy groups should closely monitor whether the initiative advances to the ballot and how its provisions evolve. Regardless of its final form, the initiative reflects a clear push to more tightly regulate how AI chatbots engage with minors.
This Week's Caveat Podcast: Consent is not optional.
Dave Bittner and Ben Yelin break down the ongoing saga surrounding Grok’s creation of nonconsensual sexually explicit images. For context, after xAI debuted Grok’s new image and video editing capabilities, users began prompting the AI chatbot to edit photos to remove clothing or create explicit imagery. In response, Indonesia and Malaysia have temporarily blocked access to Grok. Alongside these stories, Dave Bittner sits down with Caitlin Clarke, Senior Director for Cybersecurity Services at Venable, to discuss CISA 2015.
OTHER NOTEWORTHY STORIES
Supreme Court will hear Cisco’s appeal in Falun Gong lawsuit.
What: The Supreme Court has agreed to hear Cisco’s appeal in a 2011 lawsuit over the company’s alleged role in the persecution of the Falun Gong.
Why: On Friday, the Supreme Court announced it would hear Cisco’s appeal in a lawsuit alleging that the company aided the Chinese government’s persecution of the Falun Gong. In the suit, a group of Chinese nationals and a US citizen allege that Cisco enabled China’s persecution of the spiritual movement by selling products used to track and target Falun Gong members.
This case revolves around determining the scope of the Alien Tort Statute (ATS), which allows foreigners to bring lawsuits in the US for violations of international law.
Solicitor General D. John Sauer commented on the case, voicing support for Cisco:
“By requiring federal courts to determine whether the underlying conduct of foreign governments and officials was unlawful, aiding-and-abetting actions pose significant risks to the [US’] relations with foreign states and to the political branches’ ability to conduct the Nation’s foreign policy.”
A decision is expected to be reached by the summer.
Jan 9, 2026 | Source: The Hill
Qatar and UAE to join US-led effort to secure supply chains.
What: Qatar and the United Arab Emirates (UAE) announced that they would join an initiative to secure AI and semiconductor supply chains.
Why: On Sunday, Qatar and the UAE announced that they would support Pax Silica, a program that aims to establish new safeguards across the full technology supply chain. Through this program, the group aims to better secure critical minerals, advanced manufacturing, computing, and data infrastructure.
Jacob Helberg, the Undersecretary of State for Economic Affairs, stated:
“The Silicon Declaration isn’t just a diplomatic communique. It’s meant to be an operational document for a new economic security consensus.”
The group includes the United States, Israel, Japan, South Korea, Singapore, Britain, and Australia; Qatar signed the declaration on Monday and the UAE on Wednesday.
Jan 11, 2026 | Source: Reuters
