Cynthia Lummis Proposes Artificial Intelligence Bill Requiring AI Firms to Disclose Technical Details

Senator Cynthia Lummis (R-WY) has introduced the Responsible Innovation and Safe Expertise (RISE) Act of 2025, a legislative proposal designed to clarify liability frameworks for artificial intelligence (AI) used by professionals.

The bill would require greater transparency from AI developers, while stopping short of requiring models to be open source.

In a press release, Lummis said the RISE Act would mean that professionals, such as physicians, attorneys, engineers, and financial advisors, remain legally responsible for the advice they provide, even when it is informed by AI systems.

At the same time, the AI developers who create these systems can shield themselves from civil liability when things go awry only if they publicly release model cards.

The proposed bill defines model cards as detailed technical documents that disclose an AI system’s training data sources, intended use cases, performance metrics, known limitations, and potential failure modes. All of this is intended to help professionals assess whether the tool is appropriate for their work.
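To make the concept concrete, a model card can be thought of as structured documentation attached to a model. The sketch below is purely illustrative: the field names mirror the disclosures listed above, but the structure is an assumption, not the bill's actual specification.

```python
from dataclasses import dataclass

# Hypothetical sketch of the disclosure fields the RISE Act describes for
# "model cards" -- names and structure are illustrative, not drawn from
# the bill's text.
@dataclass
class ModelCard:
    training_data_sources: list[str]       # where the training data came from
    intended_use_cases: list[str]          # professional contexts the tool targets
    performance_metrics: dict[str, float]  # e.g. scores on evaluation benchmarks
    known_limitations: list[str]           # documented weaknesses
    failure_modes: list[str]               # conditions under which the model can fail
```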

“Wyoming values both innovation and accountability; the RISE Act creates predictable standards that encourage safer AI development while preserving professional autonomy,” Lummis said.

“This legislation doesn’t create blanket immunity for AI,” Lummis continued.

However, the immunity granted under this Act has clear boundaries. The legislation excludes protection for developers in instances of recklessness, willful misconduct, fraud, knowing misrepresentation, or when actions fall outside the defined scope of professional usage.

Additionally, developers face a duty of ongoing accountability under the RISE Act. AI documentation and specifications must be updated within 30 days of deploying new versions or discovering significant failure modes, reinforcing continuous transparency obligations.

Stops short of open source

The RISE Act, as it’s written now, stops short of mandating that AI models become fully open source.

Developers can withhold proprietary information, but only if the redacted material isn’t related to safety, and each omission is accompanied by a written justification explaining the trade secret exemption.

In a prior interview with CoinDesk, Simon Kim, the CEO of Hashed, one of Korea’s leading VC funds, spoke about the danger of centralized, closed-source AI that’s effectively a black box.

“OpenAI is not open, and it is controlled by very few people, so it’s quite dangerous. Making this type of [closed source] foundational model is similar to making a ‘god’, but we don’t know how it works,” Kim said at the time.
