Amendments for SB24-205
Senate Journal, April 25
After consideration on the merits, the Committee recommends that SB24-205 be amended
as follows, and as so amended, be referred to the Committee of the Whole with favorable
recommendation.
Amend printed bill, strike everything below the enacting clause and substitute:
"SECTION 1. In Colorado Revised Statutes, add part 16 to article 1
of title 6 as follows:
PART 16
ARTIFICIAL INTELLIGENCE
6-1-1601. Definitions. AS USED IN THIS PART 16, UNLESS THE CONTEXT
OTHERWISE REQUIRES:
(1) (a) "ALGORITHMIC DISCRIMINATION" MEANS ANY CONDITION IN
WHICH AN ARTIFICIAL INTELLIGENCE SYSTEM MATERIALLY INCREASES THE RISK
OF AN UNLAWFUL DIFFERENTIAL TREATMENT OR IMPACT THAT DISFAVORS AN
INDIVIDUAL OR GROUP OF INDIVIDUALS ON THE BASIS OF THEIR ACTUAL OR
PERCEIVED AGE, COLOR, DISABILITY, ETHNICITY, GENETIC INFORMATION,
LIMITED PROFICIENCY IN THE ENGLISH LANGUAGE, NATIONAL ORIGIN, RACE,
RELIGION, REPRODUCTIVE HEALTH, SEX, VETERAN STATUS, OR OTHER
CLASSIFICATION PROTECTED UNDER THE LAWS OF THIS STATE OR FEDERAL LAW.
(b) "ALGORITHMIC DISCRIMINATION" DOES NOT INCLUDE:
(I) THE OFFER, LICENSE, OR USE OF A HIGH-RISK ARTIFICIAL
INTELLIGENCE SYSTEM BY A DEVELOPER OR DEPLOYER FOR THE SOLE PURPOSE
OF:
(A) THE DEVELOPER'S OR DEPLOYER'S SELF-TESTING TO IDENTIFY,
MITIGATE, OR PREVENT DISCRIMINATION OR OTHERWISE ENSURE COMPLIANCE
WITH STATE AND FEDERAL LAW; OR
(B) EXPANDING AN APPLICANT, CUSTOMER, OR PARTICIPANT POOL TO
INCREASE DIVERSITY OR REDRESS HISTORICAL DISCRIMINATION; OR
(II) AN ACT OR OMISSION BY OR ON BEHALF OF A PRIVATE CLUB OR
OTHER ESTABLISHMENT THAT IS NOT IN FACT OPEN TO THE PUBLIC, AS SET
FORTH IN TITLE II OF THE FEDERAL "CIVIL RIGHTS ACT OF 1964", 42 U.S.C. SEC.
2000a (e), AS AMENDED.
(2) "ARTIFICIAL INTELLIGENCE SYSTEM" MEANS ANY MACHINE-BASED
SYSTEM THAT, FOR ANY EXPLICIT OR IMPLICIT OBJECTIVE, INFERS FROM THE
INPUTS THE SYSTEM RECEIVES HOW TO GENERATE OUTPUTS, INCLUDING
CONTENT, DECISIONS, PREDICTIONS, OR RECOMMENDATIONS, THAT CAN
INFLUENCE PHYSICAL OR VIRTUAL ENVIRONMENTS.
(3) "CONSEQUENTIAL DECISION" MEANS A DECISION THAT HAS A
MATERIAL LEGAL, OR SIMILARLY SIGNIFICANT, EFFECT ON A CONSUMER'S
ACCESS TO, OR THE AVAILABILITY, COST, OR TERMS OF:
(a) A CRIMINAL CASE ASSESSMENT, A SENTENCING OR PLEA AGREEMENT
ANALYSIS, OR A PARDON, PAROLE, PROBATION, OR RELEASE DECISION;
(b) EDUCATION ENROLLMENT OR AN EDUCATION OPPORTUNITY;
(c) EMPLOYMENT OR AN EMPLOYMENT OPPORTUNITY;
(d) AN ESSENTIAL UTILITY, INCLUDING ELECTRICITY, HEAT, INTERNET
OR TELECOMMUNICATIONS ACCESS, TRANSPORTATION, OR WATER;
(e) A FINANCIAL OR LENDING SERVICE;
(f) AN ESSENTIAL GOVERNMENT SERVICE;
(g) A HEALTH-CARE SERVICE;
(h) HOUSING;
(i) INSURANCE; OR
(j) A LEGAL SERVICE.
(4) "CONSUMER" MEANS AN INDIVIDUAL WHO IS A COLORADO
RESIDENT.
(5) "DEPLOY" MEANS TO USE A HIGH-RISK ARTIFICIAL INTELLIGENCE
SYSTEM.
(6) "DEPLOYER" MEANS A PERSON DOING BUSINESS IN THIS STATE THAT
DEPLOYS A HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM.
(7) "DEVELOPER" MEANS A PERSON DOING BUSINESS IN THIS STATE
THAT DEVELOPS OR INTENTIONALLY AND SUBSTANTIALLY MODIFIES A GENERAL
PURPOSE ARTIFICIAL INTELLIGENCE MODEL OR A HIGH-RISK ARTIFICIAL
INTELLIGENCE SYSTEM.
(8) (a) "GENERAL PURPOSE ARTIFICIAL INTELLIGENCE MODEL" MEANS
ANY FORM OF ARTIFICIAL INTELLIGENCE SYSTEM THAT:
(I) DISPLAYS SIGNIFICANT GENERALITY;
(II) IS CAPABLE OF COMPETENTLY PERFORMING A WIDE RANGE OF
DISTINCT TASKS; AND
(III) CAN BE INTEGRATED INTO A VARIETY OF DOWNSTREAM
APPLICATIONS OR SYSTEMS.
(b) "GENERAL PURPOSE ARTIFICIAL INTELLIGENCE MODEL" DOES NOT
INCLUDE ANY ARTIFICIAL INTELLIGENCE MODEL THAT IS USED FOR
DEVELOPMENT, PROTOTYPING, OR RESEARCH ACTIVITIES BEFORE THE MODEL IS
RELEASED ON THE MARKET.
(9) (a) "HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM" MEANS ANY
ARTIFICIAL INTELLIGENCE SYSTEM THAT, WHEN DEPLOYED, MAKES, OR IS A
SUBSTANTIAL FACTOR IN MAKING, A CONSEQUENTIAL DECISION.
(b) "HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM" DOES NOT INCLUDE:
(I) AN ARTIFICIAL INTELLIGENCE SYSTEM IF THE ARTIFICIAL
INTELLIGENCE SYSTEM IS INTENDED TO:
(A) PERFORM A NARROW PROCEDURAL TASK;
(B) IMPROVE THE RESULT OF A PREVIOUSLY COMPLETED HUMAN
ACTIVITY; OR
(C) DETECT DECISION-MAKING PATTERNS OR DEVIATIONS FROM PRIOR
DECISION-MAKING PATTERNS AND IS NOT INTENDED TO REPLACE OR INFLUENCE
A PREVIOUSLY COMPLETED HUMAN ASSESSMENT WITHOUT SUFFICIENT HUMAN
REVIEW; OR
(II) THE FOLLOWING TECHNOLOGIES, UNLESS THE TECHNOLOGIES, WHEN
DEPLOYED, MAKE, OR ARE A SUBSTANTIAL FACTOR IN MAKING, A
CONSEQUENTIAL DECISION:
(A) ANTI-MALWARE;
(B) ANTI-VIRUS;
(C) CALCULATORS;
(D) DATABASES;
(E) DATA STORAGE;
(F) FIREWALL;
(G) INTERNET DOMAIN REGISTRATION;
(H) INTERNET WEBSITE LOADING;
(I) NETWORKING;
(J) SPAM- AND ROBOCALL-FILTERING;
(K) SPELL-CHECKING;
(L) SPREADSHEETS;
(M) WEB CACHING; OR
(N) WEB HOSTING OR ANY SIMILAR TECHNOLOGY.
(10) (a) "INTENTIONAL AND SUBSTANTIAL MODIFICATION" OR
"INTENTIONALLY AND SUBSTANTIALLY MODIFIES" MEANS A DELIBERATE
CHANGE MADE TO:
(I) AN ARTIFICIAL INTELLIGENCE SYSTEM THAT RESULTS IN ANY NEW
REASONABLY FORESEEABLE RISK OF ALGORITHMIC DISCRIMINATION; OR
(II) A GENERAL PURPOSE ARTIFICIAL INTELLIGENCE MODEL THAT:
(A) AFFECTS THE COMPLIANCE OF A GENERAL PURPOSE ARTIFICIAL
INTELLIGENCE SYSTEM;
(B) MATERIALLY CHANGES THE PURPOSE OF THE GENERAL PURPOSE
ARTIFICIAL INTELLIGENCE SYSTEM; OR
(C) RESULTS IN ANY NEW REASONABLY FORESEEABLE RISK OF
ALGORITHMIC DISCRIMINATION.
(b) "INTENTIONAL AND SUBSTANTIAL MODIFICATION" OR
"INTENTIONALLY AND SUBSTANTIALLY MODIFIES" DOES NOT INCLUDE A CHANGE
MADE TO A HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM, OR THE PERFORMANCE
OF A HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM, IF:
(I) THE HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM CONTINUES TO
LEARN AFTER THE HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM IS:
(A) OFFERED, SOLD, LEASED, LICENSED, GIVEN, OR OTHERWISE MADE
AVAILABLE TO A DEPLOYER; OR
(B) DEPLOYED;
(II) THE CHANGE IS MADE TO THE HIGH-RISK ARTIFICIAL INTELLIGENCE
SYSTEM AS A RESULT OF ANY LEARNING DESCRIBED IN SUBSECTION (10)(b)(I)
OF THIS SECTION;
(III) THE CHANGE WAS PREDETERMINED BY THE DEPLOYER, OR A THIRD
PARTY CONTRACTED BY THE DEPLOYER, WHEN THE DEPLOYER OR THIRD PARTY
COMPLETED AN INITIAL IMPACT ASSESSMENT OF SUCH HIGH-RISK ARTIFICIAL
INTELLIGENCE SYSTEM PURSUANT TO SECTION 6-1-1603 (3); AND
(IV) THE CHANGE IS INCLUDED IN TECHNICAL DOCUMENTATION FOR
THE HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM.
(11) "SUBSTANTIAL FACTOR" MEANS A FACTOR THAT ASSISTS IN
MAKING, AND IS CAPABLE OF ALTERING THE OUTCOME OF, A CONSEQUENTIAL
DECISION.
(12) "SYNTHETIC DIGITAL CONTENT" MEANS DIGITAL CONTENT,
INCLUDING AUDIO, IMAGES, TEXT, OR VIDEOS, THAT IS PRODUCED OR
MANIPULATED BY AN ARTIFICIAL INTELLIGENCE SYSTEM, INCLUDING A GENERAL
PURPOSE ARTIFICIAL INTELLIGENCE MODEL.
(13) "TRADE SECRET" HAS THE MEANING SET FORTH IN SECTION
7-74-102 (4).
6-1-1602. Developer duty to avoid algorithmic discrimination -
required documentation. (1) ON AND AFTER OCTOBER 1, 2025, A DEVELOPER
OF A HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM SHALL USE REASONABLE
CARE TO PROTECT CONSUMERS FROM ANY KNOWN OR REASONABLY
FORESEEABLE RISKS OF ALGORITHMIC DISCRIMINATION. IN ANY ENFORCEMENT
ACTION BROUGHT ON OR AFTER OCTOBER 1, 2025, BY THE ATTORNEY GENERAL
OR A DISTRICT ATTORNEY PURSUANT TO SECTION 6-1-1608, THERE IS A
REBUTTABLE PRESUMPTION THAT A DEVELOPER USED REASONABLE CARE AS
REQUIRED UNDER THIS SECTION IF THE DEVELOPER COMPLIED WITH THIS
SECTION.
(2) ON AND AFTER OCTOBER 1, 2025, AND EXCEPT AS PROVIDED IN
SUBSECTION (6) OF THIS SECTION, A DEVELOPER OF A HIGH-RISK ARTIFICIAL
INTELLIGENCE SYSTEM SHALL MAKE AVAILABLE TO THE DEPLOYER OF THE
HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM:
(a) A GENERAL STATEMENT DESCRIBING THE INTENDED USES OF THE
HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM;
(b) DOCUMENTATION DISCLOSING:
(I) KNOWN OR REASONABLY FORESEEABLE LIMITATIONS OF THE
HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM, INCLUDING KNOWN OR
REASONABLY FORESEEABLE RISKS OF ALGORITHMIC DISCRIMINATION ARISING
FROM THE INTENDED USES OF THE HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM;
(II) THE PURPOSE OF THE HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM;
(III) THE INTENDED BENEFITS AND USES OF THE HIGH-RISK ARTIFICIAL
INTELLIGENCE SYSTEM; AND
(IV) RELEVANT INFORMATION CONCERNING THE MITIGATION OF
ALGORITHMIC DISCRIMINATION AND EXPLAINABILITY OF THE HIGH-RISK
ARTIFICIAL INTELLIGENCE SYSTEM;
(c) DOCUMENTATION DESCRIBING:
(I) THE TYPE OF DATA USED TO TRAIN THE HIGH-RISK ARTIFICIAL
INTELLIGENCE SYSTEM;
(II) HOW THE HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM WAS
EVALUATED FOR PERFORMANCE BEFORE THE HIGH-RISK ARTIFICIAL
INTELLIGENCE SYSTEM WAS OFFERED, SOLD, LEASED, LICENSED, GIVEN, OR
OTHERWISE MADE AVAILABLE TO THE DEPLOYER;
(III) THE DATA GOVERNANCE MEASURES USED TO COVER THE TRAINING
DATASETS AND THE MEASURES USED TO EXAMINE THE SUITABILITY OF DATA
SOURCES, POSSIBLE BIASES, AND APPROPRIATE MITIGATION;
(IV) THE INTENDED OUTPUTS OF THE HIGH-RISK ARTIFICIAL
INTELLIGENCE SYSTEM;
(V) THE MEASURES THE DEVELOPER HAS TAKEN TO MITIGATE KNOWN
OR REASONABLY FORESEEABLE RISKS OF ALGORITHMIC DISCRIMINATION THAT
MAY ARISE FROM THE DEPLOYMENT OF THE HIGH-RISK ARTIFICIAL INTELLIGENCE
SYSTEM; AND
(VI) HOW THE HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM SHOULD BE
USED OR MONITORED BY AN INDIVIDUAL WHEN THE HIGH-RISK ARTIFICIAL
INTELLIGENCE SYSTEM IS USED TO MAKE, OR IS A SUBSTANTIAL FACTOR IN
MAKING, A CONSEQUENTIAL DECISION; AND
(d) ANY ADDITIONAL DOCUMENTATION THAT IS REASONABLY
NECESSARY TO ASSIST THE DEPLOYER IN UNDERSTANDING THE OUTPUTS AND
MONITOR THE PERFORMANCE OF THE HIGH-RISK ARTIFICIAL INTELLIGENCE
SYSTEM FOR RISKS OF ALGORITHMIC DISCRIMINATION.
(3) EXCEPT AS PROVIDED IN SUBSECTION (6) OF THIS SECTION, A
DEVELOPER THAT OFFERS, SELLS, LEASES, LICENSES, GIVES, OR OTHERWISE
MAKES AVAILABLE TO A DEPLOYER A HIGH-RISK ARTIFICIAL INTELLIGENCE
SYSTEM ON OR AFTER OCTOBER 1, 2025, SHALL MAKE AVAILABLE TO THE
DEPLOYER, TO THE EXTENT FEASIBLE, THE DOCUMENTATION AND INFORMATION,
THROUGH ARTIFACTS SUCH AS MODEL CARDS, DATASET CARDS, OR OTHER
IMPACT ASSESSMENTS, NECESSARY FOR THE DEPLOYER, OR FOR A THIRD PARTY
CONTRACTED BY THE DEPLOYER, TO COMPLETE AN IMPACT ASSESSMENT
PURSUANT TO SECTION 6-1-1603 (3).
(4) (a) ON AND AFTER OCTOBER 1, 2025, A DEVELOPER SHALL MAKE
AVAILABLE, IN A MANNER THAT IS CLEAR AND READILY AVAILABLE FOR PUBLIC
INSPECTION ON THE DEVELOPER'S WEBSITE OR IN A PUBLIC USE CASE
INVENTORY, A STATEMENT SUMMARIZING:
(I) THE TYPES OF HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEMS THAT
THE DEVELOPER HAS DEVELOPED OR INTENTIONALLY AND SUBSTANTIALLY
MODIFIED AND CURRENTLY MAKES AVAILABLE TO A DEPLOYER; AND
(II) HOW THE DEVELOPER MANAGES KNOWN OR REASONABLY
FORESEEABLE RISKS OF ALGORITHMIC DISCRIMINATION THAT MAY ARISE FROM
THE DEVELOPMENT OR INTENTIONAL AND SUBSTANTIAL MODIFICATION OF THE
TYPES OF HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEMS DESCRIBED IN
SUBSECTION (4)(a)(I) OF THIS SECTION.
(b) A DEVELOPER SHALL UPDATE THE STATEMENT DESCRIBED IN
SUBSECTION (4)(a) OF THIS SECTION:
(I) AS NECESSARY TO ENSURE THAT THE STATEMENT REMAINS
ACCURATE; AND
(II) NO LATER THAN NINETY DAYS AFTER THE DEVELOPER
INTENTIONALLY AND SUBSTANTIALLY MODIFIES ANY HIGH-RISK ARTIFICIAL
INTELLIGENCE SYSTEM DESCRIBED IN SUBSECTION (4)(a)(I) OF THIS SECTION.
(5) ON AND AFTER OCTOBER 1, 2025, A DEVELOPER OF A HIGH-RISK
ARTIFICIAL INTELLIGENCE SYSTEM SHALL DISCLOSE TO THE ATTORNEY
GENERAL, IN A FORM AND MANNER PRESCRIBED BY THE ATTORNEY GENERAL,
AND TO ALL KNOWN DEPLOYERS OF THE HIGH-RISK ARTIFICIAL INTELLIGENCE
SYSTEM ANY KNOWN RISKS OF ALGORITHMIC DISCRIMINATION ARISING FROM
THE INTENDED USES OF THE HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM
WITHOUT UNREASONABLE DELAY BUT NO LATER THAN NINETY DAYS AFTER THE
DATE ON WHICH:
(a) THE DEVELOPER DISCOVERS THROUGH THE DEVELOPER'S ONGOING
TESTING AND ANALYSIS THAT THE DEVELOPER'S HIGH-RISK ARTIFICIAL
INTELLIGENCE SYSTEM HAS BEEN DEPLOYED AND HAS CAUSED ALGORITHMIC
DISCRIMINATION; OR
(b) THE DEVELOPER RECEIVES FROM A DEPLOYER A CREDIBLE REPORT
THAT THE HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM HAS BEEN DEPLOYED
AND HAS CAUSED ALGORITHMIC DISCRIMINATION.
(6) NOTHING IN SUBSECTIONS (2) TO (5) OF THIS SECTION REQUIRES A
DEVELOPER TO DISCLOSE A TRADE SECRET OR OTHER CONFIDENTIAL OR
PROPRIETARY INFORMATION.
(7) ON AND AFTER OCTOBER 1, 2025, THE ATTORNEY GENERAL MAY
REQUIRE THAT A DEVELOPER DISCLOSE TO THE ATTORNEY GENERAL, IN A FORM
AND MANNER PRESCRIBED BY THE ATTORNEY GENERAL, THE STATEMENT OR
DOCUMENTATION DESCRIBED IN SUBSECTION (2) OF THIS SECTION IF THE
STATEMENT OR DOCUMENTATION IS RELEVANT TO AN INVESTIGATION
CONDUCTED BY THE ATTORNEY GENERAL. THE ATTORNEY GENERAL MAY
EVALUATE SUCH STATEMENT OR DOCUMENTATION TO ENSURE COMPLIANCE
WITH THIS PART 16, AND THE STATEMENT OR DOCUMENTATION IS NOT SUBJECT
TO DISCLOSURE UNDER THE "COLORADO OPEN RECORDS ACT", PART 2 OF
ARTICLE 72 OF TITLE 24. TO THE EXTENT THAT ANY INFORMATION CONTAINED
IN THE STATEMENT OR DOCUMENTATION INCLUDES INFORMATION SUBJECT TO
ATTORNEY-CLIENT PRIVILEGE OR WORK-PRODUCT PROTECTION, THE
DISCLOSURE DOES NOT CONSTITUTE A WAIVER OF THE PRIVILEGE OR
PROTECTION.
6-1-1603. Deployer duty to avoid algorithmic discrimination - risk
management policy and program. (1) ON AND AFTER OCTOBER 1, 2025, A
DEPLOYER OF A HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM SHALL USE
REASONABLE CARE TO PROTECT CONSUMERS FROM ANY KNOWN OR
REASONABLY FORESEEABLE RISKS OF ALGORITHMIC DISCRIMINATION. IN ANY
ENFORCEMENT ACTION BROUGHT ON OR AFTER OCTOBER 1, 2025, BY THE
ATTORNEY GENERAL OR A DISTRICT ATTORNEY PURSUANT TO SECTION
6-1-1608, THERE IS A REBUTTABLE PRESUMPTION THAT A DEPLOYER OF A
HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM USED REASONABLE CARE AS
REQUIRED UNDER THIS SECTION IF THE DEPLOYER COMPLIED WITH THIS SECTION.
(2) (a) ON AND AFTER OCTOBER 1, 2025, AND EXCEPT AS PROVIDED IN
SUBSECTION (7) OF THIS SECTION, A DEPLOYER OF A HIGH-RISK ARTIFICIAL
INTELLIGENCE SYSTEM SHALL IMPLEMENT A RISK MANAGEMENT POLICY AND
PROGRAM TO GOVERN THE DEPLOYER'S DEPLOYMENT OF THE HIGH-RISK
ARTIFICIAL INTELLIGENCE SYSTEM. THE RISK MANAGEMENT POLICY AND
PROGRAM MUST SPECIFY AND INCORPORATE THE PRINCIPLES, PROCESSES, AND
PERSONNEL THAT THE DEPLOYER USES TO IDENTIFY, DOCUMENT, AND MITIGATE
KNOWN OR REASONABLY FORESEEABLE RISKS OF ALGORITHMIC
DISCRIMINATION. THE RISK MANAGEMENT POLICY AND PROGRAM MUST BE AN
ITERATIVE PROCESS PLANNED AND RUN THROUGHOUT THE ENTIRE LIFE CYCLE
OF A HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM, REQUIRING REGULAR,
SYSTEMATIC REVIEW AND UPDATES. A RISK MANAGEMENT POLICY AND
PROGRAM IMPLEMENTED AND MAINTAINED PURSUANT TO THIS SUBSECTION (2)
MUST BE REASONABLE CONSIDERING:
(I) (A) THE GUIDANCE AND STANDARDS SET FORTH IN THE LATEST
VERSION OF THE "ARTIFICIAL INTELLIGENCE RISK MANAGEMENT FRAMEWORK"
PUBLISHED BY THE NATIONAL INSTITUTE OF STANDARDS AND TECHNOLOGY IN
THE UNITED STATES DEPARTMENT OF COMMERCE, STANDARD ISO/IEC 42001
OF THE INTERNATIONAL ORGANIZATION FOR STANDARDIZATION, OR ANOTHER
NATIONALLY OR INTERNATIONALLY RECOGNIZED RISK MANAGEMENT
FRAMEWORK FOR ARTIFICIAL INTELLIGENCE SYSTEMS; OR
(B) ANY RISK MANAGEMENT FRAMEWORK FOR ARTIFICIAL
INTELLIGENCE SYSTEMS THAT THE ATTORNEY GENERAL, IN THE ATTORNEY
GENERAL'S DISCRETION, MAY DESIGNATE;
(II) THE SIZE AND COMPLEXITY OF THE DEPLOYER;
(III) THE NATURE AND SCOPE OF THE HIGH-RISK ARTIFICIAL
INTELLIGENCE SYSTEMS DEPLOYED BY THE DEPLOYER, INCLUDING THE
INTENDED USES OF THE HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEMS; AND
(IV) THE SENSITIVITY AND VOLUME OF DATA PROCESSED IN
CONNECTION WITH THE HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEMS
DEPLOYED BY THE DEPLOYER.
(b) A RISK MANAGEMENT POLICY AND PROGRAM IMPLEMENTED
PURSUANT TO SUBSECTION (2)(a) OF THIS SECTION MAY COVER MULTIPLE
HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEMS DEPLOYED BY THE DEPLOYER.
(3) (a) EXCEPT AS PROVIDED IN SUBSECTIONS (3)(d) AND (3)(e) OF THIS
SECTION:
(I) A DEPLOYER, OR A THIRD PARTY CONTRACTED BY THE DEPLOYER,
THAT DEPLOYS A HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM ON OR AFTER
OCTOBER 1, 2025, SHALL COMPLETE AN IMPACT ASSESSMENT FOR THE
HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM; AND
(II) ON AND AFTER OCTOBER 1, 2025, A DEPLOYER, OR A THIRD PARTY
CONTRACTED BY THE DEPLOYER, SHALL COMPLETE AN IMPACT ASSESSMENT FOR
A DEPLOYED HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM AT LEAST ANNUALLY
AND WITHIN NINETY DAYS AFTER ANY INTENTIONAL AND SUBSTANTIAL
MODIFICATION TO THE HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM IS MADE
AVAILABLE.
(b) AN IMPACT ASSESSMENT COMPLETED PURSUANT TO THIS
SUBSECTION (3) MUST INCLUDE, AT A MINIMUM:
(I) A STATEMENT BY THE DEPLOYER DISCLOSING THE PURPOSE,
INTENDED USE CASES, AND DEPLOYMENT CONTEXT OF, AND BENEFITS AFFORDED
BY, THE HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM;
(II) AN ANALYSIS OF WHETHER THE DEPLOYMENT OF THE HIGH-RISK
ARTIFICIAL INTELLIGENCE SYSTEM POSES ANY KNOWN OR REASONABLY
FORESEEABLE RISKS OF ALGORITHMIC DISCRIMINATION AND, IF SO, THE NATURE
OF THE ALGORITHMIC DISCRIMINATION AND THE STEPS THAT HAVE BEEN TAKEN
TO MITIGATE THE RISKS;
(III) A DESCRIPTION OF THE CATEGORIES OF DATA THE HIGH-RISK
ARTIFICIAL INTELLIGENCE SYSTEM PROCESSES AS INPUTS AND THE OUTPUTS THE
HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM PRODUCES;
(IV) IF THE DEPLOYER USED DATA TO CUSTOMIZE THE HIGH-RISK
ARTIFICIAL INTELLIGENCE SYSTEM, AN OVERVIEW OF THE CATEGORIES OF DATA
THE DEPLOYER USED TO CUSTOMIZE THE HIGH-RISK ARTIFICIAL INTELLIGENCE
SYSTEM;
(V) ANY METRICS USED TO EVALUATE THE PERFORMANCE AND KNOWN
LIMITATIONS OF THE HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM;
(VI) A DESCRIPTION OF ANY TRANSPARENCY MEASURES TAKEN
CONCERNING THE HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM, INCLUDING ANY
MEASURES TAKEN TO DISCLOSE TO A CONSUMER THAT THE HIGH-RISK
ARTIFICIAL INTELLIGENCE SYSTEM IS IN USE WHEN THE HIGH-RISK ARTIFICIAL
INTELLIGENCE SYSTEM IS IN USE; AND
(VII) A DESCRIPTION OF THE POST-DEPLOYMENT MONITORING AND USER
SAFEGUARDS PROVIDED CONCERNING THE HIGH-RISK ARTIFICIAL INTELLIGENCE
SYSTEM, INCLUDING THE OVERSIGHT PROCESS ESTABLISHED BY THE DEPLOYER
TO ADDRESS ISSUES ARISING FROM THE DEPLOYMENT OF THE HIGH-RISK
ARTIFICIAL INTELLIGENCE SYSTEM.
(c) IN ADDITION TO THE INFORMATION REQUIRED UNDER SUBSECTION
(3)(b) OF THIS SECTION, AN IMPACT ASSESSMENT COMPLETED PURSUANT TO THIS
SUBSECTION (3) FOLLOWING AN INTENTIONAL AND SUBSTANTIAL MODIFICATION
TO A HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM ON OR AFTER OCTOBER 1,
2025, MUST INCLUDE A STATEMENT DISCLOSING THE EXTENT TO WHICH THE
HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM WAS USED IN A MANNER THAT WAS
CONSISTENT WITH, OR VARIED FROM, THE DEVELOPER'S INTENDED USES OF THE
HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM.
(d) A SINGLE IMPACT ASSESSMENT MAY ADDRESS A COMPARABLE SET
OF HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEMS DEPLOYED BY A DEPLOYER.
(e) IF A DEPLOYER, OR A THIRD PARTY CONTRACTED BY THE DEPLOYER,
COMPLETES AN IMPACT ASSESSMENT FOR THE PURPOSE OF COMPLYING WITH
ANOTHER APPLICABLE LAW OR REGULATION, THE IMPACT ASSESSMENT
SATISFIES THE REQUIREMENTS ESTABLISHED IN THIS SUBSECTION (3) IF THE
IMPACT ASSESSMENT IS REASONABLY SIMILAR IN SCOPE AND EFFECT TO THE
IMPACT ASSESSMENT THAT WOULD OTHERWISE BE COMPLETED PURSUANT TO
THIS SUBSECTION (3).
(f) A DEPLOYER SHALL MAINTAIN THE MOST RECENTLY COMPLETED
IMPACT ASSESSMENT FOR A HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM AS
REQUIRED UNDER THIS SUBSECTION (3), ALL RECORDS CONCERNING EACH
IMPACT ASSESSMENT, AND ALL PRIOR IMPACT ASSESSMENTS, IF ANY, FOR AT
LEAST THREE YEARS FOLLOWING THE FINAL DEPLOYMENT OF THE HIGH-RISK
ARTIFICIAL INTELLIGENCE SYSTEM.
(g) ON OR BEFORE OCTOBER 1, 2025, AND AT LEAST ANNUALLY
THEREAFTER, A DEPLOYER, OR A THIRD PARTY CONTRACTED BY THE DEPLOYER,
MUST REVIEW THE DEPLOYMENT OF EACH HIGH-RISK ARTIFICIAL INTELLIGENCE
SYSTEM DEPLOYED BY THE DEPLOYER TO ENSURE THAT THE HIGH-RISK
ARTIFICIAL INTELLIGENCE SYSTEM IS NOT CAUSING ALGORITHMIC
DISCRIMINATION.
(4) (a) ON AND AFTER OCTOBER 1, 2025, AND NO LATER THAN THE TIME
THAT A DEPLOYER DEPLOYS A HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM TO
MAKE, OR BE A SUBSTANTIAL FACTOR IN MAKING, A CONSEQUENTIAL DECISION
CONCERNING A CONSUMER, THE DEPLOYER SHALL:
(I) NOTIFY THE CONSUMER THAT THE DEPLOYER HAS DEPLOYED A
HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM TO MAKE, OR BE A SUBSTANTIAL
FACTOR IN MAKING, THE CONSEQUENTIAL DECISION;
(II) PROVIDE TO THE CONSUMER A STATEMENT DISCLOSING THE
PURPOSE OF THE HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM AND THE NATURE
OF THE CONSEQUENTIAL DECISION, THE CONTACT INFORMATION FOR THE
DEPLOYER, AND A DESCRIPTION, IN PLAIN LANGUAGE, OF THE HIGH-RISK
ARTIFICIAL INTELLIGENCE SYSTEM, INCLUDING A DESCRIPTION OF ANY HUMAN
COMPONENTS OF THE HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM AND A
DESCRIPTION OF:
(A) THE PERSONAL ATTRIBUTES OR CHARACTERISTICS THAT THE
HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM ASSESSES OR MEASURES, THE
METHOD BY WHICH THE HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM ASSESSES
OR MEASURES THE ATTRIBUTES OR CHARACTERISTICS, AND WHY THE
ATTRIBUTES OR CHARACTERISTICS ARE RELEVANT TO THE CONSEQUENTIAL
DECISION;
(B) THE OUTPUTS OF THE HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM;
(C) THE LOGIC USED BY THE HIGH-RISK ARTIFICIAL INTELLIGENCE
SYSTEM, INCLUDING THE KEY PARAMETERS THAT AFFECT THE OUTPUTS OF THE
HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM;
(D) THE SOURCES OF DATA USED BY THE HIGH-RISK ARTIFICIAL
INTELLIGENCE SYSTEM;
(E) THE SOURCES AND TYPES OF DATA COLLECTED FROM CONSUMERS
AND PROCESSED BY THE HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM WHEN
THE HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM IS USED TO MAKE, OR IS A
SUBSTANTIAL FACTOR IN MAKING, A CONSEQUENTIAL DECISION;
(F) THE RESULTS OF THE IMPACT ASSESSMENT MOST RECENTLY
COMPLETED FOR THE HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM PURSUANT
TO SUBSECTION (3) OF THIS SECTION OR AN ACTIVE LINK TO A WEBSITE WHERE
THE CONSUMER MAY REVIEW THE RESULTS;
(G) ANY HUMAN COMPONENTS OF THE HIGH-RISK ARTIFICIAL
INTELLIGENCE SYSTEM; AND
(H) HOW THE AUTOMATED COMPONENTS OF THE HIGH-RISK ARTIFICIAL
INTELLIGENCE SYSTEM ARE USED TO INFORM THE CONSEQUENTIAL DECISION;
AND
(III) PROVIDE TO THE CONSUMER INFORMATION, IF APPLICABLE,
REGARDING THE CONSUMER'S RIGHT TO OPT OUT OF THE PROCESSING OF
PERSONAL DATA CONCERNING THE CONSUMER FOR PURPOSES OF PROFILING IN
FURTHERANCE OF DECISIONS THAT PRODUCE LEGAL OR SIMILARLY SIGNIFICANT
EFFECTS CONCERNING THE CONSUMER UNDER SECTION 6-1-1306 (1)(a)(I)(C).
(b) A DEPLOYER SHALL PROVIDE THE CONSUMER WITH AN OPPORTUNITY
TO APPEAL AN ADVERSE CONSEQUENTIAL DECISION CONCERNING THE
CONSUMER ARISING FROM THE DEPLOYMENT OF A HIGH-RISK ARTIFICIAL
INTELLIGENCE SYSTEM, WHICH APPEAL MUST, IF TECHNICALLY FEASIBLE, ALLOW
FOR HUMAN REVIEW.
(c) (I) EXCEPT AS PROVIDED IN SUBSECTION (4)(c)(II) OF THIS SECTION,
A DEPLOYER SHALL PROVIDE THE NOTICE, STATEMENT, CONTACT INFORMATION,
AND DESCRIPTION REQUIRED BY SUBSECTION (4)(a) OF THIS SECTION:
(A) DIRECTLY TO THE CONSUMER;
(B) IN PLAIN LANGUAGE;
(C) IN ALL LANGUAGES IN WHICH THE DEPLOYER, IN THE ORDINARY
COURSE OF THE DEPLOYER'S BUSINESS, PROVIDES CONTRACTS, DISCLAIMERS,
SALE ANNOUNCEMENTS, AND OTHER INFORMATION TO CONSUMERS; AND
(D) IN A FORMAT THAT IS ACCESSIBLE TO CONSUMERS WITH
DISABILITIES.
(II) IF THE DEPLOYER IS UNABLE TO PROVIDE THE NOTICE, STATEMENT,
CONTACT INFORMATION, AND DESCRIPTION REQUIRED BY SUBSECTION (4)(a) OF
THIS SECTION DIRECTLY TO THE CONSUMER, THE DEPLOYER SHALL MAKE THE
NOTICE, STATEMENT, CONTACT INFORMATION, AND DESCRIPTION AVAILABLE IN
A MANNER THAT IS REASONABLY CALCULATED TO ENSURE THAT THE CONSUMER
RECEIVES THE NOTICE, STATEMENT, CONTACT INFORMATION, AND DESCRIPTION.
(5) (a) ON AND AFTER OCTOBER 1, 2025, A DEPLOYER SHALL MAKE
AVAILABLE, IN A MANNER THAT IS CLEAR AND READILY AVAILABLE FOR PUBLIC
INSPECTION, A STATEMENT SUMMARIZING:
(I) THE TYPES OF HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEMS THAT
ARE CURRENTLY DEPLOYED BY THE DEPLOYER;
(II) HOW THE DEPLOYER MANAGES KNOWN OR REASONABLY
FORESEEABLE RISKS OF ALGORITHMIC DISCRIMINATION THAT MAY ARISE FROM
THE DEPLOYMENT OF EACH HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM
DESCRIBED PURSUANT TO SUBSECTION (5)(a)(I) OF THIS SECTION; AND
(III) IN DETAIL, THE NATURE, SOURCE, AND EXTENT OF THE
INFORMATION COLLECTED AND USED BY THE DEPLOYER.
(b) A DEPLOYER SHALL PERIODICALLY UPDATE THE STATEMENT
DESCRIBED IN SUBSECTION (5)(a) OF THIS SECTION.
(6) IF A DEPLOYER DEPLOYS A HIGH-RISK ARTIFICIAL INTELLIGENCE
SYSTEM ON OR AFTER OCTOBER 1, 2025, AND SUBSEQUENTLY DISCOVERS THAT
THE HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM HAS CAUSED ALGORITHMIC
DISCRIMINATION AGAINST A CONSUMER, THE DEPLOYER, WITHOUT
UNREASONABLE DELAY, BUT NO LATER THAN NINETY DAYS AFTER THE DATE OF
THE DISCOVERY, SHALL SEND TO THE ATTORNEY GENERAL, IN A FORM AND
MANNER PRESCRIBED BY THE ATTORNEY GENERAL, A NOTICE DISCLOSING THE
DISCOVERY.
(7) NOTHING IN SUBSECTIONS (2) TO (6) OF THIS SECTION REQUIRES A
DEPLOYER TO DISCLOSE A TRADE SECRET OR OTHER CONFIDENTIAL OR
PROPRIETARY INFORMATION.
(8) ON AND AFTER OCTOBER 1, 2025, THE ATTORNEY GENERAL MAY
REQUIRE THAT A DEPLOYER, OR A THIRD PARTY CONTRACTED BY THE
DEPLOYER, DISCLOSE TO THE ATTORNEY GENERAL, IN A FORM AND MANNER
PRESCRIBED BY THE ATTORNEY GENERAL, THE RISK MANAGEMENT POLICY
IMPLEMENTED PURSUANT TO SUBSECTION (2) OF THIS SECTION, IMPACT
ASSESSMENT COMPLETED PURSUANT TO SUBSECTION (3) OF THIS SECTION, OR
RECORDS MAINTAINED PURSUANT TO SUBSECTION (3)(f) OF THIS SECTION IF THE
RISK MANAGEMENT POLICY, IMPACT ASSESSMENT, OR RECORDS ARE RELEVANT
TO AN INVESTIGATION CONDUCTED BY THE ATTORNEY GENERAL. THE
ATTORNEY GENERAL MAY EVALUATE THE RISK MANAGEMENT POLICY, IMPACT
ASSESSMENT, OR RECORDS TO ENSURE COMPLIANCE WITH THIS PART 16, AND
THE RISK MANAGEMENT POLICY, IMPACT ASSESSMENT, AND RECORDS ARE NOT
SUBJECT TO DISCLOSURE UNDER THE "COLORADO OPEN RECORDS ACT", PART
2 OF ARTICLE 72 OF TITLE 24. TO THE EXTENT THAT ANY INFORMATION
CONTAINED IN THE RISK MANAGEMENT POLICY, IMPACT ASSESSMENT, OR
RECORDS INCLUDES INFORMATION SUBJECT TO ATTORNEY-CLIENT PRIVILEGE OR
WORK-PRODUCT PROTECTION, THE DISCLOSURE DOES NOT CONSTITUTE A
WAIVER OF THE PRIVILEGE OR PROTECTION.
6-1-1604. General purpose artificial intelligence model - developer
documentation requirements - copyright policy - exceptions - rules. (1) ON
AND AFTER JANUARY 1, 2026, A DEVELOPER OF A GENERAL PURPOSE ARTIFICIAL
INTELLIGENCE MODEL SHALL:
(a) EXCEPT AS PROVIDED IN SUBSECTION (2)(a) OF THIS SECTION,
CREATE AND MAINTAIN TECHNICAL DOCUMENTATION FOR THE GENERAL
PURPOSE ARTIFICIAL INTELLIGENCE MODEL, WHICH DOCUMENTATION MUST:
(I) INCLUDE:
(A) THE TRAINING AND TESTING PROCESSES FOR THE GENERAL PURPOSE
ARTIFICIAL INTELLIGENCE MODEL; AND
(B) THE RESULTS OF AN EVALUATION OF THE GENERAL PURPOSE
ARTIFICIAL INTELLIGENCE MODEL TO DETERMINE WHETHER THE GENERAL
PURPOSE ARTIFICIAL INTELLIGENCE MODEL IS IN COMPLIANCE WITH SECTIONS
6-1-1601 TO 6-1-1607;
(II) INCLUDE AT LEAST THE FOLLOWING INFORMATION, AS
APPROPRIATE, CONSIDERING THE SIZE AND RISK PROFILE OF THE GENERAL
PURPOSE ARTIFICIAL INTELLIGENCE MODEL:
(A) THE TASKS THE GENERAL PURPOSE ARTIFICIAL INTELLIGENCE
MODEL IS INTENDED TO PERFORM;
(B) THE TYPE AND NATURE OF ARTIFICIAL INTELLIGENCE SYSTEMS INTO
WHICH THE GENERAL PURPOSE ARTIFICIAL INTELLIGENCE MODEL IS INTENDED
TO BE INTEGRATED;
(C) ACCEPTABLE USE POLICIES FOR THE GENERAL PURPOSE ARTIFICIAL
INTELLIGENCE MODEL;
(D) THE DATE THE GENERAL PURPOSE ARTIFICIAL INTELLIGENCE MODEL
IS RELEASED;
(E) THE METHODS BY WHICH THE GENERAL PURPOSE ARTIFICIAL
INTELLIGENCE MODEL IS DISTRIBUTED;
(F) THE MODALITY AND FORMAT OF INPUTS AND OUTPUTS FOR THE
GENERAL PURPOSE ARTIFICIAL INTELLIGENCE MODEL; AND
(G) A DESCRIPTION OF THE DATA THAT WAS USED FOR PURPOSES OF
TRAINING, TESTING, AND VALIDATION, WHERE APPLICABLE, INCLUDING THE
TYPE AND PROVENANCE OF THE DATA, DATA CURATION METHODOLOGIES, HOW
THE DATA WAS OBTAINED AND SELECTED, ALL OTHER MEASURES USED TO
IDENTIFY UNSUITABLE DATA SOURCES, AND METHODS USED TO DETECT
IDENTIFIABLE BIASES, WHERE APPLICABLE; AND
(b) CREATE, IMPLEMENT, MAINTAIN, AND MAKE AVAILABLE TO A
PERSON THAT INTENDS TO INTEGRATE THE GENERAL PURPOSE ARTIFICIAL
INTELLIGENCE MODEL INTO THE PERSON'S ARTIFICIAL INTELLIGENCE SYSTEMS
DOCUMENTATION AND INFORMATION THAT:
(I) ENABLES THE PERSON TO:
(A) UNDERSTAND THE CAPABILITIES AND LIMITATIONS OF THE GENERAL
PURPOSE ARTIFICIAL INTELLIGENCE MODEL; AND
(B) COMPLY WITH THE PERSON'S OBLIGATIONS UNDER THIS PART 16;
(II) DISCLOSES, AT A MINIMUM:
(A) THE TECHNICAL REQUIREMENTS FOR THE GENERAL PURPOSE
ARTIFICIAL INTELLIGENCE MODEL TO BE INTEGRATED INTO THE PERSON'S
ARTIFICIAL INTELLIGENCE SYSTEMS; AND
(B) THE INFORMATION REQUIRED BY THIS SUBSECTION (1)(b);
(c) EXCEPT AS PROVIDED IN SUBSECTION (2)(a) OF THIS SECTION,
REVIEW AND REVISE THE TECHNICAL DOCUMENTATION FOR THE GENERAL
PURPOSE ARTIFICIAL INTELLIGENCE MODEL CREATED PURSUANT TO
SUBSECTIONS (1)(a) AND (1)(b) OF THIS SECTION AT LEAST ANNUALLY OR MORE
FREQUENTLY AS NECESSARY TO MAINTAIN THE ACCURACY OF THE TECHNICAL
DOCUMENTATION;
(d) EXCEPT AS PROVIDED IN SUBSECTION (2)(a) OF THIS SECTION,
ESTABLISH, IMPLEMENT, AND MAINTAIN A POLICY TO COMPLY WITH FEDERAL
AND STATE COPYRIGHT LAWS; AND
(e) EXCEPT AS PROVIDED IN SUBSECTION (2)(a) OF THIS SECTION,
CREATE, MAINTAIN, AND MAKE PUBLICLY AVAILABLE, IN A FORM AND MANNER
PRESCRIBED BY THE ATTORNEY GENERAL, A DETAILED SUMMARY CONCERNING
THE CONTENT USED TO TRAIN THE GENERAL PURPOSE ARTIFICIAL INTELLIGENCE
MODEL.
(2) (a) SUBSECTION (1) OF THIS SECTION DOES NOT APPLY TO A
DEVELOPER THAT DEVELOPS OR INTENTIONALLY AND SUBSTANTIALLY MODIFIES
A GENERAL PURPOSE ARTIFICIAL INTELLIGENCE MODEL ON OR AFTER JANUARY
1, 2026, IF:
(I) THE DEVELOPER RELEASES THE GENERAL PURPOSE ARTIFICIAL
INTELLIGENCE MODEL UNDER A FREE AND OPEN-SOURCE LICENSE THAT ALLOWS
FOR:
(A) ACCESS TO, AND MODIFICATION, DISTRIBUTION, AND USAGE OF, THE
GENERAL PURPOSE ARTIFICIAL INTELLIGENCE MODEL; AND
(B) THE PARAMETERS OF THE GENERAL PURPOSE ARTIFICIAL
INTELLIGENCE MODEL TO BE MADE AVAILABLE AS SET FORTH IN SUBSECTION
(2)(a)(II) OF THIS SECTION; AND
(II) UNLESS THE GENERAL PURPOSE ARTIFICIAL INTELLIGENCE MODEL
IS DEPLOYED AS A HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM, THE
PARAMETERS OF THE GENERAL PURPOSE ARTIFICIAL INTELLIGENCE MODEL,
INCLUDING THE WEIGHTS AND INFORMATION CONCERNING THE MODEL
ARCHITECTURE AND MODEL USAGE FOR THE GENERAL PURPOSE ARTIFICIAL
INTELLIGENCE MODEL, ARE MADE PUBLICLY AVAILABLE.
(b) A DEVELOPER THAT ACTS UNDER THE EXEMPTION ESTABLISHED IN
SUBSECTION (2)(a) OF THIS SECTION BEARS THE BURDEN OF DEMONSTRATING
THAT THE ACTION QUALIFIES FOR SUCH EXEMPTION.
(3) NOTHING IN SUBSECTION (1) OF THIS SECTION REQUIRES A
DEVELOPER TO DISCLOSE A TRADE SECRET OR OTHER CONFIDENTIAL OR
PROPRIETARY INFORMATION.
(4) ON AND AFTER JANUARY 1, 2026, THE ATTORNEY GENERAL MAY
REQUIRE THAT A DEVELOPER OF A GENERAL PURPOSE ARTIFICIAL INTELLIGENCE
MODEL DISCLOSE TO THE ATTORNEY GENERAL, IN A FORM AND MANNER
PRESCRIBED BY THE ATTORNEY GENERAL, ANY DOCUMENTATION MAINTAINED
PURSUANT TO THIS SECTION IF THE DOCUMENTATION IS RELEVANT TO AN
INVESTIGATION CONDUCTED BY THE ATTORNEY GENERAL. THE ATTORNEY
GENERAL MAY EVALUATE THE DOCUMENTATION TO ENSURE COMPLIANCE WITH
THIS SECTION AND ANY RULES ADOPTED PURSUANT TO SECTION 6-1-1609, AND
THE DOCUMENTATION IS NOT SUBJECT TO DISCLOSURE UNDER THE "COLORADO
OPEN RECORDS ACT", PART 2 OF ARTICLE 72 OF TITLE 24. TO THE EXTENT THAT
THE DOCUMENTATION INCLUDES INFORMATION SUBJECT TO ATTORNEY-CLIENT
PRIVILEGE OR WORK-PRODUCT PROTECTION, THE DISCLOSURE DOES NOT
CONSTITUTE A WAIVER OF THE PRIVILEGE OR PROTECTION.
6-1-1605. Disclosure of a
Senate Journal, May 2
SB24-205 by Senator(s) Rodriguez; --Concerning consumer protections in interactions with artificial
intelligence systems.
Amendment No. 1, Judiciary Committee Amendment.
(Printed in Senate Journal, April 25, page(s) 1055-1066 and placed in members' bill files.)
Amendment No. 2 (L.003), by Senator Rodriguez.
Amend printed bill, strike everything below the enacting clause and substitute:
"SECTION 1. In Colorado Revised Statutes, add part 16 to article 1
of title 6 as follows:
PART 16
ARTIFICIAL INTELLIGENCE
6-1-1601. Definitions. AS USED IN THIS PART 16, UNLESS THE CONTEXT
OTHERWISE REQUIRES:
(1) (a) "ALGORITHMIC DISCRIMINATION" MEANS ANY CONDITION IN
WHICH THE USE OF AN ARTIFICIAL INTELLIGENCE SYSTEM MATERIALLY
INCREASES THE RISK OF AN UNLAWFUL DIFFERENTIAL TREATMENT OR IMPACT
THAT DISFAVORS AN INDIVIDUAL OR GROUP OF INDIVIDUALS ON THE BASIS OF
THEIR ACTUAL OR PERCEIVED AGE, COLOR, DISABILITY, ETHNICITY, GENETIC
INFORMATION, LIMITED PROFICIENCY IN THE ENGLISH LANGUAGE, NATIONAL
ORIGIN, RACE, RELIGION, REPRODUCTIVE HEALTH, SEX, VETERAN STATUS, OR
OTHER CLASSIFICATION PROTECTED UNDER THE LAWS OF THIS STATE OR
FEDERAL LAW.
(b) "ALGORITHMIC DISCRIMINATION" DOES NOT INCLUDE:
(I) THE OFFER, LICENSE, OR USE OF A HIGH-RISK ARTIFICIAL
INTELLIGENCE SYSTEM BY A DEVELOPER OR DEPLOYER FOR THE SOLE PURPOSE
OF:
(A) THE DEVELOPER'S OR DEPLOYER'S SELF-TESTING TO IDENTIFY,
MITIGATE, OR PREVENT DISCRIMINATION OR OTHERWISE ENSURE COMPLIANCE
WITH STATE AND FEDERAL LAW; OR
(B) EXPANDING AN APPLICANT, CUSTOMER, OR PARTICIPANT POOL TO
INCREASE DIVERSITY OR REDRESS HISTORICAL DISCRIMINATION; OR
(II) AN ACT OR OMISSION BY OR ON BEHALF OF A PRIVATE CLUB OR
OTHER ESTABLISHMENT THAT IS NOT IN FACT OPEN TO THE PUBLIC, AS SET
FORTH IN TITLE II OF THE FEDERAL "CIVIL RIGHTS ACT OF 1964", 42 U.S.C. SEC.
2000a (e), AS AMENDED.
(2) "ARTIFICIAL INTELLIGENCE SYSTEM" MEANS ANY MACHINE-BASED
SYSTEM THAT, FOR ANY EXPLICIT OR IMPLICIT OBJECTIVE, INFERS FROM THE
INPUTS THE SYSTEM RECEIVES HOW TO GENERATE OUTPUTS, INCLUDING
CONTENT, DECISIONS, PREDICTIONS, OR RECOMMENDATIONS, THAT CAN
INFLUENCE PHYSICAL OR VIRTUAL ENVIRONMENTS.
(3) "CONSEQUENTIAL DECISION" MEANS A DECISION THAT HAS A
MATERIAL LEGAL OR SIMILARLY SIGNIFICANT EFFECT ON THE PROVISION OR
DENIAL TO ANY CONSUMER OF, OR THE COST OR TERMS OF:
(a) EDUCATION ENROLLMENT OR AN EDUCATION OPPORTUNITY;
(b) EMPLOYMENT OR AN EMPLOYMENT OPPORTUNITY;
(c) A FINANCIAL OR LENDING SERVICE;
(d) AN ESSENTIAL GOVERNMENT SERVICE;
(e) HEALTH-CARE SERVICES;
(f) HOUSING;
(g) INSURANCE; OR
(h) A LEGAL SERVICE.
(4) "CONSUMER" MEANS AN INDIVIDUAL WHO IS A COLORADO
RESIDENT.
(5) "DEPLOY" MEANS TO USE A HIGH-RISK ARTIFICIAL INTELLIGENCE
SYSTEM.
(6) "DEPLOYER" MEANS A PERSON DOING BUSINESS IN THIS STATE THAT
DEPLOYS A HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM.
(7) "DEVELOPER" MEANS A PERSON DOING BUSINESS IN THIS STATE
THAT DEVELOPS OR INTENTIONALLY AND SUBSTANTIALLY MODIFIES AN
ARTIFICIAL INTELLIGENCE SYSTEM.
(8) "HEALTH-CARE SERVICES" HAS THE SAME MEANING AS PROVIDED IN
42 U.S.C. SEC. 234 (d)(2).
(9) (a) "HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM" MEANS ANY
ARTIFICIAL INTELLIGENCE SYSTEM THAT, WHEN DEPLOYED, MAKES, OR IS A
SUBSTANTIAL FACTOR IN MAKING, A CONSEQUENTIAL DECISION.
(b) "HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM" DOES NOT INCLUDE:
(I) AN ARTIFICIAL INTELLIGENCE SYSTEM IF THE ARTIFICIAL
INTELLIGENCE SYSTEM IS INTENDED TO:
(A) PERFORM A NARROW PROCEDURAL TASK; OR
(B) DETECT DECISION-MAKING PATTERNS OR DEVIATIONS FROM PRIOR
DECISION-MAKING PATTERNS AND IS NOT INTENDED TO REPLACE OR INFLUENCE
A PREVIOUSLY COMPLETED HUMAN ASSESSMENT WITHOUT SUFFICIENT HUMAN
REVIEW; OR
(II) THE FOLLOWING TECHNOLOGIES, UNLESS THE TECHNOLOGIES, WHEN
DEPLOYED, MAKE, OR ARE A SUBSTANTIAL FACTOR IN MAKING, A
CONSEQUENTIAL DECISION:
(A) ANTI-FRAUD TECHNOLOGY THAT DOES NOT USE FACIAL
RECOGNITION TECHNOLOGY;
(B) ANTI-MALWARE;
(C) ANTI-VIRUS;
(D) ARTIFICIAL INTELLIGENCE-ENABLED VIDEO GAMES;
(E) CALCULATORS;
(F) CYBERSECURITY;
(G) DATABASES;
(H) DATA STORAGE;
(I) FIREWALL;
(J) INTERNET DOMAIN REGISTRATION;
(K) INTERNET WEBSITE LOADING;
(L) NETWORKING;
(M) SPAM- AND ROBOCALL-FILTERING;
(N) SPELL-CHECKING;
(O) SPREADSHEETS;
(P) WEB CACHING;
(Q) WEB HOSTING OR ANY SIMILAR TECHNOLOGY; OR
(R) TECHNOLOGY THAT COMMUNICATES IN NATURAL LANGUAGE FOR
THE PURPOSE OF PROVIDING USERS WITH INFORMATION, MAKING REFERRALS OR
RECOMMENDATIONS, AND ANSWERING QUESTIONS AND IS SUBJECT TO AN
ACCEPTED USE POLICY THAT PROHIBITS GENERATING CONTENT THAT IS
DISCRIMINATORY OR HARMFUL.
(10) (a) "INTENTIONAL AND SUBSTANTIAL MODIFICATION" OR
"INTENTIONALLY AND SUBSTANTIALLY MODIFIES" MEANS A DELIBERATE
CHANGE MADE TO AN ARTIFICIAL INTELLIGENCE SYSTEM THAT RESULTS IN ANY
NEW REASONABLY FORESEEABLE RISK OF ALGORITHMIC DISCRIMINATION.
(b) "INTENTIONAL AND SUBSTANTIAL MODIFICATION" OR
"INTENTIONALLY AND SUBSTANTIALLY MODIFIES" DOES NOT INCLUDE A CHANGE
MADE TO A HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM, OR THE PERFORMANCE
OF A HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM, IF:
(I) THE HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM CONTINUES TO
LEARN AFTER THE HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM IS:
(A) OFFERED, SOLD, LEASED, LICENSED, GIVEN, OR OTHERWISE MADE
AVAILABLE TO A DEPLOYER; OR
(B) DEPLOYED;
(II) THE CHANGE IS MADE TO THE HIGH-RISK ARTIFICIAL INTELLIGENCE
SYSTEM AS A RESULT OF ANY LEARNING DESCRIBED IN SUBSECTION (10)(b)(I)
OF THIS SECTION;
(III) THE CHANGE WAS PREDETERMINED BY THE DEPLOYER, OR A THIRD
PARTY CONTRACTED BY THE DEPLOYER, WHEN THE DEPLOYER OR THIRD PARTY
COMPLETED AN INITIAL IMPACT ASSESSMENT OF SUCH HIGH-RISK ARTIFICIAL
INTELLIGENCE SYSTEM PURSUANT TO SECTION 6-1-1603 (3); AND
(IV) THE CHANGE IS INCLUDED IN TECHNICAL DOCUMENTATION FOR
THE HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM.
(11) (a) "SUBSTANTIAL FACTOR" MEANS A FACTOR THAT:
(I) ASSISTS IN MAKING A CONSEQUENTIAL DECISION;
(II) IS CAPABLE OF ALTERING THE OUTCOME OF A CONSEQUENTIAL
DECISION; AND
(III) IS GENERATED BY AN ARTIFICIAL INTELLIGENCE SYSTEM.
(b) "SUBSTANTIAL FACTOR" INCLUDES ANY USE OF AN ARTIFICIAL
INTELLIGENCE SYSTEM TO GENERATE ANY CONTENT, DECISION, PREDICTION, OR
RECOMMENDATION CONCERNING A CONSUMER THAT IS USED AS A BASIS TO
MAKE A CONSEQUENTIAL DECISION CONCERNING THE CONSUMER.
(12) "TRADE SECRET" HAS THE MEANING SET FORTH IN SECTION
7-74-102 (4).
6-1-1602. Developer duty to avoid algorithmic discrimination -
required documentation. (1) ON AND AFTER FEBRUARY 1, 2026, A
DEVELOPER OF A HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM SHALL USE
REASONABLE CARE TO PROTECT CONSUMERS FROM ANY KNOWN OR
REASONABLY FORESEEABLE RISKS OF ALGORITHMIC DISCRIMINATION ARISING
FROM THE INTENDED AND CONTRACTED USES OF THE HIGH-RISK ARTIFICIAL
INTELLIGENCE SYSTEM. IN ANY ENFORCEMENT ACTION BROUGHT ON OR AFTER
FEBRUARY 1, 2026, BY THE ATTORNEY GENERAL PURSUANT TO SECTION
6-1-1606, THERE IS A REBUTTABLE PRESUMPTION THAT A DEVELOPER USED
REASONABLE CARE AS REQUIRED UNDER THIS SECTION IF THE DEVELOPER
COMPLIED WITH THIS SECTION AND ANY ADDITIONAL REQUIREMENTS OR
OBLIGATIONS AS SET FORTH IN RULES PROMULGATED BY THE ATTORNEY
GENERAL PURSUANT TO SECTION 6-1-1607.
(2) ON AND AFTER FEBRUARY 1, 2026, AND EXCEPT AS PROVIDED IN
SUBSECTION (6) OF THIS SECTION, A DEVELOPER OF A HIGH-RISK ARTIFICIAL
INTELLIGENCE SYSTEM SHALL MAKE AVAILABLE TO THE DEPLOYER OR OTHER
DEVELOPER OF THE HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM:
(a) A GENERAL STATEMENT DESCRIBING THE INTENDED USES OF THE
HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM;
(b) DOCUMENTATION DISCLOSING:
(I) HIGH-LEVEL SUMMARIES OF THE TYPE OF DATA USED TO TRAIN THE
HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM;
(II) KNOWN OR REASONABLY FORESEEABLE LIMITATIONS OF THE
HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM, INCLUDING KNOWN OR
REASONABLY FORESEEABLE RISKS OF ALGORITHMIC DISCRIMINATION ARISING
FROM THE INTENDED USES OF THE HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM;
(III) THE PURPOSE OF THE HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM;
AND
(IV) THE INTENDED BENEFITS AND USES OF THE HIGH-RISK ARTIFICIAL
INTELLIGENCE SYSTEM;
(c) DOCUMENTATION DESCRIBING:
(I) HOW THE HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM WAS
EVALUATED FOR PERFORMANCE AND MITIGATION OF ALGORITHMIC
DISCRIMINATION BEFORE THE HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM WAS
OFFERED, SOLD, LEASED, LICENSED, GIVEN, OR OTHERWISE MADE AVAILABLE TO
THE DEPLOYER;
(II) THE DATA GOVERNANCE MEASURES USED TO COVER THE TRAINING
DATASETS AND THE MEASURES USED TO EXAMINE THE SUITABILITY OF DATA
SOURCES, POSSIBLE BIASES, AND APPROPRIATE MITIGATION;
(III) THE INTENDED OUTPUTS OF THE HIGH-RISK ARTIFICIAL
INTELLIGENCE SYSTEM;
(IV) THE MEASURES THE DEVELOPER HAS TAKEN TO MITIGATE KNOWN
OR REASONABLY FORESEEABLE RISKS OF ALGORITHMIC DISCRIMINATION THAT
MAY ARISE FROM THE DEPLOYMENT OF THE HIGH-RISK ARTIFICIAL INTELLIGENCE
SYSTEM; AND
(V) HOW THE HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM SHOULD BE
USED OR MONITORED BY AN INDIVIDUAL WHEN THE HIGH-RISK ARTIFICIAL
INTELLIGENCE SYSTEM IS USED TO MAKE, OR IS A SUBSTANTIAL FACTOR IN
MAKING, A CONSEQUENTIAL DECISION; AND
(d) ANY ADDITIONAL DOCUMENTATION THAT IS REASONABLY
NECESSARY TO ASSIST THE DEPLOYER IN UNDERSTANDING THE OUTPUTS AND
MONITOR THE PERFORMANCE OF THE HIGH-RISK ARTIFICIAL INTELLIGENCE
SYSTEM FOR RISKS OF ALGORITHMIC DISCRIMINATION.
(3) (a) EXCEPT AS PROVIDED IN SUBSECTION (6) OF THIS SECTION, A
DEVELOPER THAT OFFERS, SELLS, LEASES, LICENSES, GIVES, OR OTHERWISE
MAKES AVAILABLE TO A DEPLOYER OR OTHER DEVELOPER A HIGH-RISK
ARTIFICIAL INTELLIGENCE SYSTEM ON OR AFTER FEBRUARY 1, 2026, SHALL
MAKE AVAILABLE TO THE DEPLOYER OR OTHER DEVELOPER, TO THE EXTENT
FEASIBLE, THE DOCUMENTATION AND INFORMATION, THROUGH ARTIFACTS SUCH
AS MODEL CARDS, DATASET CARDS, OR OTHER IMPACT ASSESSMENTS,
NECESSARY FOR A DEPLOYER, OR FOR A THIRD PARTY CONTRACTED BY A
DEPLOYER, TO COMPLETE AN IMPACT ASSESSMENT PURSUANT TO SECTION
6-1-1603 (3).
(b) A DEVELOPER THAT ALSO SERVES AS A DEPLOYER FOR A HIGH-RISK
ARTIFICIAL INTELLIGENCE SYSTEM IS NOT REQUIRED TO GENERATE THE
DOCUMENTATION REQUIRED BY THIS SECTION UNLESS THE HIGH-RISK ARTIFICIAL
INTELLIGENCE SYSTEM IS PROVIDED TO AN UNAFFILIATED ENTITY ACTING AS A
DEPLOYER.
(4) (a) ON AND AFTER FEBRUARY 1, 2026, A DEVELOPER SHALL MAKE
AVAILABLE, IN A MANNER THAT IS CLEAR AND READILY AVAILABLE ON THE
DEVELOPER'S WEBSITE OR IN A PUBLIC USE CASE INVENTORY, A STATEMENT
SUMMARIZING:
(I) THE TYPES OF HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEMS THAT
THE DEVELOPER HAS DEVELOPED OR INTENTIONALLY AND SUBSTANTIALLY
MODIFIED AND CURRENTLY MAKES AVAILABLE TO A DEPLOYER OR OTHER
DEVELOPER; AND
(II) HOW THE DEVELOPER MANAGES KNOWN OR REASONABLY
FORESEEABLE RISKS OF ALGORITHMIC DISCRIMINATION THAT MAY ARISE FROM
THE DEVELOPMENT OR INTENTIONAL AND SUBSTANTIAL MODIFICATION OF THE
TYPES OF HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEMS DESCRIBED IN
ACCORDANCE WITH SUBSECTION (4)(a)(I) OF THIS SECTION.
(b) A DEVELOPER SHALL UPDATE THE STATEMENT DESCRIBED IN
SUBSECTION (4)(a) OF THIS SECTION:
(I) AS NECESSARY TO ENSURE THAT THE STATEMENT REMAINS
ACCURATE; AND
(II) NO LATER THAN NINETY DAYS AFTER THE DEVELOPER
INTENTIONALLY AND SUBSTANTIALLY MODIFIES ANY HIGH-RISK ARTIFICIAL
INTELLIGENCE SYSTEM DESCRIBED IN SUBSECTION (4)(a)(I) OF THIS SECTION.
(5) ON AND AFTER FEBRUARY 1, 2026, A DEVELOPER OF A HIGH-RISK
ARTIFICIAL INTELLIGENCE SYSTEM SHALL DISCLOSE TO THE ATTORNEY
GENERAL, IN A FORM AND MANNER PRESCRIBED BY THE ATTORNEY GENERAL,
AND TO ALL KNOWN DEPLOYERS OR OTHER DEVELOPERS, OF THE HIGH-RISK
ARTIFICIAL INTELLIGENCE SYSTEM ANY KNOWN OR REASONABLY FORESEEABLE
RISKS OF ALGORITHMIC DISCRIMINATION ARISING FROM THE INTENDED USES OF
THE HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM WITHOUT UNREASONABLE
DELAY BUT NO LATER THAN NINETY DAYS AFTER THE DATE ON WHICH:
(a) THE DEVELOPER DISCOVERS THROUGH THE DEVELOPER'S ONGOING
TESTING AND ANALYSIS THAT THE DEVELOPER'S HIGH-RISK ARTIFICIAL
INTELLIGENCE SYSTEM HAS BEEN DEPLOYED AND HAS CAUSED OR IS
REASONABLY LIKELY TO HAVE CAUSED ALGORITHMIC DISCRIMINATION; OR
(b) THE DEVELOPER RECEIVES FROM A DEPLOYER A CREDIBLE REPORT
THAT THE HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM HAS BEEN DEPLOYED
AND HAS CAUSED ALGORITHMIC DISCRIMINATION.
(6) NOTHING IN SUBSECTIONS (2) TO (5) OF THIS SECTION REQUIRES A
DEVELOPER TO DISCLOSE A TRADE SECRET OR OTHER CONFIDENTIAL OR
PROPRIETARY INFORMATION.
(7) ON AND AFTER FEBRUARY 1, 2026, THE ATTORNEY GENERAL MAY
REQUIRE THAT A DEVELOPER DISCLOSE TO THE ATTORNEY GENERAL, IN A FORM
AND MANNER PRESCRIBED BY THE ATTORNEY GENERAL, THE STATEMENT OR
DOCUMENTATION DESCRIBED IN SUBSECTION (2) OF THIS SECTION. THE
ATTORNEY GENERAL MAY EVALUATE SUCH STATEMENT OR DOCUMENTATION TO
ENSURE COMPLIANCE WITH THIS PART 16, AND THE STATEMENT OR
DOCUMENTATION IS NOT SUBJECT TO DISCLOSURE UNDER THE "COLORADO
OPEN RECORDS ACT", PART 2 OF ARTICLE 72 OF TITLE 24. IN A DISCLOSURE
PURSUANT TO THIS SUBSECTION (7), A DEVELOPER MAY DESIGNATE THE
STATEMENT OR DOCUMENTATION AS INCLUDING PROPRIETARY INFORMATION OR
A TRADE SECRET. TO THE EXTENT THAT ANY INFORMATION CONTAINED IN THE
STATEMENT OR DOCUMENTATION INCLUDES INFORMATION SUBJECT TO
ATTORNEY-CLIENT PRIVILEGE OR WORK-PRODUCT PROTECTION, THE
DISCLOSURE DOES NOT CONSTITUTE A WAIVER OF THE PRIVILEGE OR
PROTECTION.
6-1-1603. Deployer duty to avoid algorithmic discrimination - risk
management policy and program. (1) ON AND AFTER FEBRUARY 1, 2026, A
DEPLOYER OF A HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM SHALL USE
REASONABLE CARE TO PROTECT CONSUMERS FROM ANY KNOWN OR
REASONABLY FORESEEABLE RISKS OF ALGORITHMIC DISCRIMINATION. IN ANY
ENFORCEMENT ACTION BROUGHT ON OR AFTER FEBRUARY 1, 2026, BY THE
ATTORNEY GENERAL PURSUANT TO SECTION 6-1-1606, THERE IS A REBUTTABLE
PRESUMPTION THAT A DEPLOYER OF A HIGH-RISK ARTIFICIAL INTELLIGENCE
SYSTEM USED REASONABLE CARE AS REQUIRED UNDER THIS SECTION IF THE
DEPLOYER COMPLIED WITH THIS SECTION AND ANY ADDITIONAL REQUIREMENTS
OR OBLIGATIONS AS SET FORTH IN RULES PROMULGATED BY THE ATTORNEY
GENERAL PURSUANT TO SECTION 6-1-1607.
(2) (a) ON AND AFTER FEBRUARY 1, 2026, AND EXCEPT AS PROVIDED IN
SUBSECTION (8) OF THIS SECTION, A DEPLOYER OF A HIGH-RISK ARTIFICIAL
INTELLIGENCE SYSTEM SHALL IMPLEMENT A RISK MANAGEMENT POLICY AND
PROGRAM TO GOVERN THE DEPLOYER'S DEPLOYMENT OF THE HIGH-RISK
ARTIFICIAL INTELLIGENCE SYSTEM. THE RISK MANAGEMENT POLICY AND
PROGRAM MUST SPECIFY AND INCORPORATE THE PRINCIPLES, PROCESSES, AND
PERSONNEL THAT THE DEPLOYER USES TO IDENTIFY, DOCUMENT, AND MITIGATE
KNOWN OR REASONABLY FORESEEABLE RISKS OF ALGORITHMIC
DISCRIMINATION. THE RISK MANAGEMENT POLICY AND PROGRAM MUST BE AN
ITERATIVE PROCESS PLANNED, IMPLEMENTED, AND REGULARLY AND
SYSTEMATICALLY REVIEWED AND UPDATED OVER THE LIFE CYCLE OF A
HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM, REQUIRING REGULAR,
SYSTEMATIC REVIEW AND UPDATES. A RISK MANAGEMENT POLICY AND
PROGRAM IMPLEMENTED AND MAINTAINED PURSUANT TO THIS SUBSECTION (2)
MUST BE REASONABLE CONSIDERING:
(I) (A) THE GUIDANCE AND STANDARDS SET FORTH IN THE LATEST
VERSION OF THE "ARTIFICIAL INTELLIGENCE RISK MANAGEMENT FRAMEWORK"
PUBLISHED BY THE NATIONAL INSTITUTE OF STANDARDS AND TECHNOLOGY IN
THE UNITED STATES DEPARTMENT OF COMMERCE, STANDARD ISO/IEC 42001
OF THE INTERNATIONAL ORGANIZATION FOR STANDARDIZATION, OR ANOTHER
NATIONALLY OR INTERNATIONALLY RECOGNIZED RISK MANAGEMENT
FRAMEWORK FOR ARTIFICIAL INTELLIGENCE SYSTEMS; OR
(B) ANY RISK MANAGEMENT FRAMEWORK FOR ARTIFICIAL
INTELLIGENCE SYSTEMS THAT THE ATTORNEY GENERAL, IN THE ATTORNEY
GENERAL'S DISCRETION, MAY DESIGNATE;
(II) THE SIZE AND COMPLEXITY OF THE DEPLOYER;
(III) THE NATURE AND SCOPE OF THE HIGH-RISK ARTIFICIAL
INTELLIGENCE SYSTEMS DEPLOYED BY THE DEPLOYER, INCLUDING THE
INTENDED USES OF THE HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEMS; AND
(IV) THE SENSITIVITY AND VOLUME OF DATA PROCESSED IN
CONNECTION WITH THE HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEMS
DEPLOYED BY THE DEPLOYER.
(b) A RISK MANAGEMENT POLICY AND PROGRAM IMPLEMENTED
PURSUANT TO SUBSECTION (2)(a) OF THIS SECTION MAY COVER MULTIPLE
HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEMS DEPLOYED BY THE DEPLOYER.
(3) (a) EXCEPT AS PROVIDED IN SUBSECTIONS (3)(d), (3)(e), AND (6) OF
THIS SECTION:
(I) A DEPLOYER, OR A THIRD PARTY CONTRACTED BY THE DEPLOYER,
THAT DEPLOYS A HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM ON OR AFTER
FEBRUARY 1, 2026, SHALL COMPLETE AN IMPACT ASSESSMENT FOR THE
HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM; AND
(II) ON AND AFTER FEBRUARY 1, 2026, A DEPLOYER, OR A THIRD PARTY
CONTRACTED BY THE DEPLOYER, SHALL COMPLETE AN IMPACT ASSESSMENT FOR
A DEPLOYED HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM AT LEAST ANNUALLY
AND WITHIN NINETY DAYS AFTER ANY INTENTIONAL AND SUBSTANTIAL
MODIFICATION TO THE HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM IS MADE
AVAILABLE.
(b) AN IMPACT ASSESSMENT COMPLETED PURSUANT TO THIS
SUBSECTION (3) MUST INCLUDE, AT A MINIMUM, AND TO THE EXTENT
REASONABLY KNOWN BY OR AVAILABLE TO THE DEPLOYER:
(I) A STATEMENT BY THE DEPLOYER DISCLOSING THE PURPOSE,
INTENDED USE CASES, AND DEPLOYMENT CONTEXT OF, AND BENEFITS AFFORDED
BY, THE HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM;
(II) AN ANALYSIS OF WHETHER THE DEPLOYMENT OF THE HIGH-RISK
ARTIFICIAL INTELLIGENCE SYSTEM POSES ANY KNOWN OR REASONABLY
FORESEEABLE RISKS OF ALGORITHMIC DISCRIMINATION AND, IF SO, THE NATURE
OF THE ALGORITHMIC DISCRIMINATION AND THE STEPS THAT HAVE BEEN TAKEN
TO MITIGATE THE RISKS;
(III) A DESCRIPTION OF THE CATEGORIES OF DATA THE HIGH-RISK
ARTIFICIAL INTELLIGENCE SYSTEM PROCESSES AS INPUTS AND THE OUTPUTS THE
HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM PRODUCES;
(IV) IF THE DEPLOYER USED DATA TO CUSTOMIZE THE HIGH-RISK
ARTIFICIAL INTELLIGENCE SYSTEM, AN OVERVIEW OF THE CATEGORIES OF DATA
THE DEPLOYER USED TO CUSTOMIZE THE HIGH-RISK ARTIFICIAL INTELLIGENCE
SYSTEM;
(V) ANY METRICS USED TO EVALUATE THE PERFORMANCE AND KNOWN
LIMITATIONS OF THE HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM;
(VI) A DESCRIPTION OF ANY TRANSPARENCY MEASURES TAKEN
CONCERNING THE HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM, INCLUDING ANY
MEASURES TAKEN TO DISCLOSE TO A CONSUMER THAT THE HIGH-RISK
ARTIFICIAL INTELLIGENCE SYSTEM IS IN USE WHEN THE HIGH-RISK ARTIFICIAL
INTELLIGENCE SYSTEM IS IN USE; AND
(VII) A DESCRIPTION OF THE POST-DEPLOYMENT MONITORING AND USER
SAFEGUARDS PROVIDED CONCERNING THE HIGH-RISK ARTIFICIAL INTELLIGENCE
SYSTEM, INCLUDING THE OVERSIGHT, USE, AND LEARNING PROCESS
ESTABLISHED BY THE DEPLOYER TO ADDRESS ISSUES ARISING FROM THE
DEPLOYMENT OF THE HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM.
(c) IN ADDITION TO THE INFORMATION REQUIRED UNDER SUBSECTION
(3)(b) OF THIS SECTION, AN IMPACT ASSESSMENT COMPLETED PURSUANT TO THIS
SUBSECTION (3) FOLLOWING AN INTENTIONAL AND SUBSTANTIAL MODIFICATION
TO A HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM ON OR AFTER FEBRUARY 1,
2026, MUST INCLUDE A STATEMENT DISCLOSING THE EXTENT TO WHICH THE
HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM WAS USED IN A MANNER THAT WAS
CONSISTENT WITH, OR VARIED FROM, THE DEVELOPER'S INTENDED USES OF THE
HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM.
(d) A SINGLE IMPACT ASSESSMENT MAY ADDRESS A COMPARABLE SET
OF HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEMS DEPLOYED BY A DEPLOYER.
(e) IF A DEPLOYER, OR A THIRD PARTY CONTRACTED BY THE DEPLOYER,
COMPLETES AN IMPACT ASSESSMENT FOR THE PURPOSE OF COMPLYING WITH
ANOTHER APPLICABLE LAW OR REGULATION, THE IMPACT ASSESSMENT
SATISFIES THE REQUIREMENTS ESTABLISHED IN THIS SUBSECTION (3) IF THE
IMPACT ASSESSMENT IS REASONABLY SIMILAR IN SCOPE AND EFFECT TO THE
IMPACT ASSESSMENT THAT WOULD OTHERWISE BE COMPLETED PURSUANT TO
THIS SUBSECTION (3).
(f) A DEPLOYER SHALL MAINTAIN THE MOST RECENTLY COMPLETED
IMPACT ASSESSMENT FOR A HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM AS
REQUIRED UNDER THIS SUBSECTION (3), ALL RECORDS CONCERNING EACH
IMPACT ASSESSMENT, AND ALL PRIOR IMPACT ASSESSMENTS, IF ANY, FOR AT
LEAST THREE YEARS FOLLOWING THE FINAL DEPLOYMENT OF THE HIGH-RISK
ARTIFICIAL INTELLIGENCE SYSTEM.
(g) ON OR BEFORE FEBRUARY 1, 2026, AND AT LEAST ANNUALLY
THEREAFTER, A DEPLOYER, OR A THIRD PARTY CONTRACTED BY THE DEPLOYER,
MUST REVIEW THE DEPLOYMENT OF EACH HIGH-RISK ARTIFICIAL INTELLIGENCE
SYSTEM DEPLOYED BY THE DEPLOYER TO ENSURE THAT THE HIGH-RISK
ARTIFICIAL INTELLIGENCE SYSTEM IS NOT CAUSING ALGORITHMIC
DISCRIMINATION.
(4) (a) ON AND AFTER FEBRUARY 1, 2026, AND NO LATER THAN THE
TIME THAT A DEPLOYER DEPLOYS A HIGH-RISK ARTIFICIAL INTELLIGENCE
SYSTEM TO MAKE, OR BE A SUBSTANTIAL FACTOR IN MAKING, A CONSEQUENTIAL
DECISION CONCERNING A CONSUMER, THE DEPLOYER SHALL:
(I) NOTIFY THE CONSUMER THAT THE DEPLOYER HAS DEPLOYED A
HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM TO MAKE, OR BE A SUBSTANTIAL
FACTOR IN MAKING, A CONSEQUENTIAL DECISION BEFORE THE DECISION IS
MADE;
(II) PROVIDE TO THE CONSUMER A STATEMENT DISCLOSING THE
PURPOSE OF THE HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM AND THE NATURE
OF THE CONSEQUENTIAL DECISION; THE CONTACT INFORMATION FOR THE
DEPLOYER; A DESCRIPTION, IN PLAIN LANGUAGE, OF THE HIGH-RISK ARTIFICIAL
INTELLIGENCE SYSTEM; AND INSTRUCTIONS ON HOW TO ACCESS THE STATEMENT
REQUIRED BY SUBSECTION (5)(a) OF THIS SECTION; AND
(III) PROVIDE TO THE CONSUMER INFORMATION, IF APPLICABLE,
REGARDING THE CONSUMER'S RIGHT TO OPT OUT OF THE PROCESSING OF
PERSONAL DATA CONCERNING THE CONSUMER FOR PURPOSES OF PROFILING IN
FURTHERANCE OF DECISIONS THAT PRODUCE LEGAL OR SIMILARLY SIGNIFICANT
EFFECTS CONCERNING THE CONSUMER UNDER SECTION 6-1-1306 (1)(a)(I)(C).
(b) ON AND AFTER FEBRUARY 1, 2026, A DEPLOYER THAT HAS
DEPLOYED A HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM TO MAKE, OR BE A
SUBSTANTIAL FACTOR IN MAKING, A CONSEQUENTIAL DECISION CONCERNING A
CONSUMER SHALL, IF THE CONSEQUENTIAL DECISION IS ADVERSE TO THE
CONSUMER, PROVIDE TO THE CONSUMER:
(I) A STATEMENT DISCLOSING THE PRINCIPAL REASON OR REASONS FOR
THE CONSEQUENTIAL DECISION, INCLUDING:
(A) THE DEGREE TO WHICH, AND MANNER IN WHICH, THE HIGH-RISK
ARTIFICIAL INTELLIGENCE SYSTEM CONTRIBUTED TO THE CONSEQUENTIAL
DECISION;
(B) THE DATA THAT WAS PROCESSED BY THE HIGH-RISK ARTIFICIAL
INTELLIGENCE SYSTEM IN MAKING THE CONSEQUENTIAL DECISION; AND
(C) THE SOURCE OR SOURCES OF THE DATA DESCRIBED IN SUBSECTION
(4)(b)(I)(B) OF THIS SECTION;
(II) AN OPPORTUNITY TO CORRECT ANY INCORRECT PERSONAL DATA
THAT THE HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM PROCESSED IN MAKING,
OR AS A SUBSTANTIAL FACTOR IN MAKING, THE CONSEQUENTIAL DECISION; AND
(III) AN OPPORTUNITY TO APPEAL AN ADVERSE CONSEQUENTIAL
DECISION CONCERNING THE CONSUMER ARISING FROM THE DEPLOYMENT OF A
HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM, WHICH APPEAL MUST, IF
TECHNICALLY FEASIBLE, ALLOW FOR HUMAN REVIEW UNLESS PROVIDING THE
OPPORTUNITY FOR APPEAL IS NOT IN THE BEST INTEREST OF THE CONSUMER,
INCLUDING IN INSTANCES IN WHICH ANY DELAY MIGHT POSE A RISK TO THE LIFE
OR SAFETY OF SUCH CONSUMER.
(c) (I) THE CONSUMER, BASED ON THE INFORMATION IN THE STATEMENT
PROVIDED PURSUANT TO SUBSECTION (4)(b)(I) OF THIS SECTION, BEARS THE
BURDEN OF DEMONSTRATING THAT THERE WAS A MATERIAL ERROR OR OMISSION
WARRANTING HUMAN REVIEW PURSUANT TO SUBSECTION (4)(b)(III) OF THIS
SECTION.
(II) A DEPLOYER THAT HAS DEPLOYED A HIGH-RISK ARTIFICIAL
INTELLIGENCE SYSTEM TO MAKE, OR BE A SUBSTANTIAL FACTOR IN MAKING, A
CONSEQUENTIAL DECISION CONCERNING A CONSUMER MAY CONTRACTUALLY
AGREE TO HAVE A DEVELOPER PROVIDE THE NOTICES AND DISCLOSURES TO AND
CONDUCT THE APPEAL PROCESS REQUIRED BY THIS SUBSECTION (4) FOR
CONSUMERS.
(d) (I) EXCEPT AS PROVIDED IN SUBSECTION (4)(d)(II) OF THIS SECTION,
A DEPLOYER SHALL PROVIDE THE NOTICE, STATEMENT, CONTACT INFORMATION,
AND DESCRIPTION REQUIRED BY SUBSECTIONS (4)(a) AND (4)(b) OF THIS
SECTION:
(A) DIRECTLY TO THE CONSUMER;
(B) IN PLAIN LANGUAGE;
(C) IN ALL LANGUAGES IN WHICH THE DEPLOYER, IN THE ORDINARY
COURSE OF THE DEPLOYER'S BUSINESS, PROVIDES CONTRACTS, DISCLAIMERS,
SALE ANNOUNCEMENTS, AND OTHER INFORMATION TO CONSUMERS; AND
(D) IN A FORMAT THAT IS ACCESSIBLE TO CONSUMERS WITH
DISABILITIES.
(II) IF THE DEPLOYER IS UNABLE TO PROVIDE THE NOTICE, STATEMENT,
CONTACT INFORMATION, AND DESCRIPTION REQUIRED BY SUBSECTIONS (4)(a)
AND (4)(b) OF THIS SECTION DIRECTLY TO THE CONSUMER, THE DEPLOYER
SHALL MAKE THE NOTICE, STATEMENT, CONTACT INFORMATION, AND
DESCRIPTION AVAILABLE IN A MANNER THAT IS REASONABLY CALCULATED TO
ENSURE THAT THE CONSUMER RECEIVES THE NOTICE, STATEMENT, CONTACT
INFORMATION, AND DESCRIPTION.
(5) (a) ON AND AFTER FEBRUARY 1, 2026, AND EXCEPT AS PROVIDED IN
SUBSECTION (6) OF THIS SECTION, A DEPLOYER SHALL MAKE AVAILABLE, IN A
MANNER THAT IS CLEAR AND READILY AVAILABLE ON THE DEPLOYER'S WEBSITE,
A STATEMENT SUMMARIZING:
(I) THE TYPES OF HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEMS THAT
ARE CURRENTLY DEPLOYED BY THE DEPLOYER;
(II) HOW THE DEPLOYER MANAGES KNOWN OR REASONABLY
FORESEEABLE RISKS OF ALGORITHMIC DISCRIMINATION THAT MAY ARISE FROM
THE DEPLOYMENT OF EACH HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM
DESCRIBED PURSUANT TO SUBSECTION (5)(a)(I) OF THIS SECTION; AND
(III) IN DETAIL, THE NATURE, SOURCE, AND EXTENT OF THE
INFORMATION COLLECTED AND USED BY THE DEPLOYER.
(b) A DEPLOYER SHALL PERIODICALLY UPDATE THE STATEMENT
DESCRIBED IN SUBSECTION (5)(a) OF THIS SECTION.
(6) SUBSECTIONS (2) AND (3) OF THIS SECTION AND THIS SUBSECTION (6)
DO NOT APPLY TO A DEPLOYER IF, AT THE TIME THE DEPLOYER DEPLOYS A
HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM AND AT ALL TIMES WHILE THE
HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM IS DEPLOYED:
(a) THE DEPLOYER:
(I) EMPLOYS FEWER THAN FIFTY FULL-TIME EQUIVALENT EMPLOYEES;
AND
(II) DOES NOT USE THE DEPLOYER'S OWN DATA TO TRAIN THE HIGH-RISK
ARTIFICIAL INTELLIGENCE SYSTEM;
(b) THE HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM:
(I) IS USED FOR THE INTENDED USES THAT ARE DISCLOSED TO THE
DEPLOYER AS REQUIRED BY SECTION 6-1-1602 (2)(a); AND
(II) CONTINUES LEARNING BASED ON DATA DERIVED FROM SOURCES
OTHER THAN THE DEPLOYER'S OWN DATA; AND
(c) THE DEPLOYER MAKES AVAILABLE TO CONSUMERS ANY IMPACT
ASSESSMENT THAT:
(I) THE DEVELOPER OF THE HIGH-RISK ARTIFICIAL INTELLIGENCE
SYSTEM HAS COMPLETED AND PROVIDED TO THE DEPLOYER; AND
(II) INCLUDES INFORMATION THAT IS SUBSTANTIALLY SIMILAR TO THE
INFORMATION IN THE IMPACT ASSESSMENT REQUIRED UNDER SUBSECTION (3)(b)
OF THIS SECTION.
(7) IF A DEPLOYER DEPLOYS A HIGH-RISK ARTIFICIAL INTELLIGENCE
SYSTEM ON OR AFTER FEBRUARY 1, 2026, AND SUBSEQUENTLY DISCOVERS THAT
THE HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM HAS CAUSED ALGORITHMIC
DISCRIMINATION, THE DEPLOYER, WITHOUT UNREASONABLE DELAY, BUT NO
LATER THAN NINETY DAYS AFTER THE DATE OF THE DISCOVERY, SHALL SEND TO
THE ATTORNEY GENERAL, IN A FORM AND MANNER PRESCRIBED BY THE
ATTORNEY GENERAL, A NOTICE DISCLOSING THE DISCOVERY.
(8) NOTHING IN SUBSECTIONS (2) TO (5) AND (7) OF THIS SECTION
REQUIRES A DEPLOYER TO DISCLOSE A TRADE SECRET OR OTHER CONFIDENTIAL
OR PROPRIETARY INFORMATION.
(9) ON AND AFTER FEBRUARY 1, 2026, THE ATTORNEY GENERAL MAY
REQUIRE THAT A DEPLOYER, OR A THIRD PARTY CONTRACTED BY THE
DEPLOYER, DISCLOSE TO THE ATTORNEY GENERAL, IN A FORM AND MANNER
PRESCRIBED BY THE ATTORNEY GENERAL, THE RISK MANAGEMENT POLICY
IMPLEMENTED PURSUANT TO SUBSECTION (2) OF THIS SECTION, THE IMPACT
ASSESSMENT COMPLETED PURSUANT TO SUBSECTION (3) OF THIS SECTION, OR
THE RECORDS MAINTAINED PURSUANT TO SUBSECTION (3)(f) OF THIS SECTION.
THE ATTORNEY GENERAL MAY EVALUATE THE RISK MANAGEMENT POLICY,
IMPACT ASSESSMENT, OR RECORDS TO ENSURE COMPLIANCE WITH THIS PART 16,
AND THE RISK MANAGEMENT POLICY, IMPACT ASSESSMENT, AND RECORDS ARE
NOT SUBJECT TO DISCLOSURE UNDER THE "COLORADO OPEN RECORDS ACT",
PART 2 OF ARTICLE 72 OF TITLE 24. IN A DISCLOSURE PURSUANT TO THIS
SUBSECTION (9), A DEPLOYER MAY DESIGNATE THE STATEMENT OR
DOCUMENTATION AS INCLUDING PROPRIETARY INFORMATION OR A TRADE
SECRET. TO THE EXTENT THAT ANY INFORMATION CONTAINED IN THE RISK
MANAGEMENT POLICY, IMPACT ASSESSMENT, OR RECORDS INCLUDES
INFORMATION SUBJECT TO ATTORNEY-CLIENT PRIVILEGE OR WORK-PRODUCT
PROTECTION, THE DISCLOSURE DOES NOT CONSTITUTE A WAIVER OF THE
PRIVILEGE OR PROTECTION.
6-1-1604. Disclosure of an artificial intelligence system to consumer.
(1) ON AND AFTER FEBRUARY 1, 2026, AND EXCEPT AS PROVIDED IN
SUBSECTION (2) OF THIS SECTION, A DEPLOYER OR OTHER DEVELOPER THAT
DEPLOYS, OFFERS, SELLS, LEASES, LICENSES, GIVES, OR OTHERWISE MAKES
AVAILABLE AN ARTIFICIAL INTELLIGENCE SYSTEM THAT IS INTENDED TO
INTERACT WITH CONSUMERS SHALL ENSURE THE DISCLOSURE TO EACH
CONSUMER WHO INTERACTS WITH THE ARTIFICIAL INTELLIGENCE SYSTEM THAT
THE CONSUMER IS INTERACTING WITH AN ARTIFICIAL INTELLIGENCE SYSTEM.
(2) DISCLOSURE IS NOT REQUIRED UNDER SUBSECTION (1) OF THIS
SECTION UNDER CIRCUMSTANCES IN WHICH IT WOULD BE OBVIOUS TO A
REASONABLE PERSON THAT THE PERSON IS INTERACTING WITH A HIGH-RISK
ARTIFICIAL INTELLIGENCE SYSTEM.
6-1-1605. Compliance with other legal obligations - definitions.
(1) NOTHING IN THIS PART 16 RESTRICTS A DEVELOPER'S, A DEPLOYER'S, OR
OTHER PERSON'S ABILITY TO:
(a) COMPLY WITH FEDERAL, STATE, OR MUNICIPAL LAWS, ORDINANCES,
OR REGULATIONS;
(b) COMPLY WITH A CIVIL, CRIMINAL, OR REGULATORY INQUIRY,
INVESTIGATION, SUBPOENA, OR SUMMONS BY A FEDERAL, A STATE, A
MUNICIPAL, OR OTHER GOVERNMENTAL AUTHORITY;
(c) COOPERATE WITH A LAW ENFORCEMENT AGENCY CONCERNING
CONDUCT OR ACTIVITY THAT THE DEVELOPER, DEPLOYER, OR OTHER PERSON
REASONABLY AND IN GOOD FAITH BELIEVES MAY VIOLATE FEDERAL, STATE, OR
MUNICIPAL LAWS, ORDINANCES, OR REGULATIONS;
(d) INVESTIGATE, ESTABLISH, EXERCISE, PREPARE FOR, OR DEFEND
LEGAL CLAIMS;
(e) TAKE IMMEDIATE STEPS TO PROTECT AN INTEREST THAT IS
ESSENTIAL FOR THE LIFE OR PHYSICAL SAFETY OF A CONSUMER OR ANOTHER
INDIVIDUAL;
(f) BY ANY MEANS OTHER THAN THE USE OF FACIAL RECOGNITION
TECHNOLOGY, PREVENT, DETECT, PROTECT AGAINST, OR RESPOND TO SECURITY
INCIDENTS, IDENTITY THEFT, FRAUD, HARASSMENT, MALICIOUS OR DECEPTIVE
ACTIVITIES, OR ILLEGAL ACTIVITY; INVESTIGATE, REPORT, OR PROSECUTE THE
PERSONS RESPONSIBLE FOR ANY SUCH ACTION; OR PRESERVE THE INTEGRITY OR
SECURITY OF SYSTEMS;
(g) ENGAGE IN PUBLIC OR PEER-REVIEWED SCIENTIFIC OR STATISTICAL
RESEARCH IN THE PUBLIC INTEREST THAT ADHERES TO ALL OTHER APPLICABLE
ETHICS AND PRIVACY LAWS AND IS CONDUCTED IN ACCORDANCE WITH 45 CFR
46, AS AMENDED, OR RELEVANT REQUIREMENTS ESTABLISHED BY THE FEDERAL
FOOD AND DRUG ADMINISTRATION;
(h) CONDUCT RESEARCH, TESTING, AND DEVELOPMENT ACTIVITIES
REGARDING AN ARTIFICIAL INTELLIGENCE SYSTEM OR MODEL, OTHER THAN
TESTING CONDUCTED UNDER REAL-WORLD CONDITIONS, BEFORE THE ARTIFICIAL
INTELLIGENCE SYSTEM OR MODEL IS PLACED ON THE MARKET, DEPLOYED, OR
PUT INTO SERVICE, AS APPLICABLE; OR
(i) ASSIST ANOTHER DEVELOPER, DEPLOYER, OR OTHER PERSON WITH
ANY OF THE OBLIGATIONS IMPOSED UNDER THIS PART 16.
(2) THE OBLIGATIONS IMPOSED ON DEVELOPERS, DEPLOYERS, OR OTHER
PERSONS UNDER THIS PART 16 DO NOT RESTRICT A DEVELOPER'S, A DEPLOYER'S,
OR OTHER PERSON'S ABILITY TO:
(a) EFFECTUATE A PRODUCT RECALL; OR
(b) IDENTIFY AND REPAIR TECHNICAL ERRORS THAT IMPAIR EXISTING OR
INTENDED FUNCTIONALITY.
(3) THE OBLIGATIONS IMPOSED ON DEVELOPERS, DEPLOYERS, OR OTHER
PERSONS UNDER THIS PART 16 DO NOT APPLY WHERE COMPLIANCE WITH THIS
PART 16 BY THE DEVELOPER, DEPLOYER, OR OTHER PERSON WOULD VIOLATE AN
EVIDENTIARY PRIVILEGE UNDER THE LAWS OF THIS STATE.
(4) NOTHING IN THIS PART 16 IMPOSES ANY OBLIGATION ON A
DEVELOPER, A DEPLOYER, OR OTHER PERSON THAT ADVERSELY AFFECTS THE
RIGHTS OR FREEDOMS OF A PERSON, INCLUDING THE RIGHTS OF A PERSON TO
FREEDOM OF SPEECH OR FREEDOM OF THE PRESS THAT ARE GUARANTEED IN:
(a) THE FIRST AMENDMENT TO THE UNITED STATES CONSTITUTION
Senate Journal, May 3
SB24-205 by Senator(s) Rodriguez; also Representative(s) Titone and Rutinel--Concerning consumer
protections in interactions with artificial intelligence systems.
A majority of those elected to the Senate having voted in the affirmative, Senator
Rodriguez was given permission to offer a third reading amendment.
Third Reading Amendment No. 1 (L.005), by Senator Rodriguez.
Amend engrossed bill, page 4, lines 9 and 10, strike "MATERIALLY INCREASES
THE RISK OF" and substitute "RESULTS IN".
Page 13, lines 6 and 7, strike "SECRET OR OTHER CONFIDENTIAL OR
PROPRIETARY INFORMATION." and substitute "SECRET, OTHER CONFIDENTIAL OR
PROPRIETARY INFORMATION, OR INFORMATION THAT WOULD CREATE A
SECURITY RISK TO THE DEVELOPER.".
Page 19, line 20, before "DATA" insert "TYPE OF".
Page 20, strike lines 8 through 19 and substitute:
"(c) (I) EXCEPT AS PROVIDED IN SUBSECTION (4)(c)(II) OF THIS".
The amendment was passed on the following roll call vote:
YES 33 NO 1 EXCUSED 1 ABSENT 0
Baisley Y Ginal Y Marchman E Simpson Y
Bridges Y Gonzales Y Michaelson Y Smallwood Y
Buckner Y Hansen Y Mullica Y Sullivan Y
Coleman Y Hinrichsen Y Pelton B. Y Van Winkle Y
Cutter Y Jaquez Y Pelton R. Y Will Y
Danielson Y Kirkmeyer Y Priola Y Winter F. Y
Exum Y Kolker Y Rich N Zenzinger Y
Fields Y Liston Y Roberts Y President Y
Gardner Y Lundeen Y Rodriguez Y
The question being "Shall the bill, as amended, pass?", the roll call was taken with the
following result:
House Journal, May 5
SB24-205 be amended as follows, and as so amended, be referred to the Committee of the Whole with favorable recommendation:

Amend reengrossed bill, page 7, line 15, after "COMMUNICATES" insert "WITH CONSUMERS".

Page 13, lines 6 and 7, strike "OTHER CONFIDENTIAL OR PROPRIETARY INFORMATION," and substitute "INFORMATION PROTECTED FROM DISCLOSURE BY STATE OR FEDERAL LAW,".

Page 13, line 11, before "IN" insert "NO LATER THAN NINETY DAYS AFTER THE REQUEST AND".

Page 22, lines 21 and 22, strike "OTHER CONFIDENTIAL OR PROPRIETARY INFORMATION." and substitute "INFORMATION PROTECTED FROM DISCLOSURE BY STATE OR FEDERAL LAW. TO THE EXTENT THAT A DEPLOYER WITHHOLDS INFORMATION PURSUANT TO THIS SUBSECTION (8), THE DEPLOYER SHALL NOTIFY THE CONSUMER AND PROVIDE A BASIS FOR THE WITHHOLDING.".

Page 22, line 25, before "IN" insert "NO LATER THAN NINETY DAYS AFTER THE REQUEST AND".

Page 26, line 16, strike "TECHNOLOGY;" and substitute "TECHNOLOGY IF THE STANDARDS ARE SUBSTANTIALLY EQUIVALENT OR MORE STRINGENT THAN THE REQUIREMENTS OF THIS PART 16;".

Page 27, strike lines 23 through 27 and substitute:

"(7) AN INSURER, AS DEFINED IN SECTION 10-1-102 (13), A FRATERNAL BENEFIT SOCIETY, AS DESCRIBED IN SECTION 10-14-102, OR A DEVELOPER OF AN ARTIFICIAL INTELLIGENCE SYSTEM USED BY AN INSURER IS IN FULL COMPLIANCE WITH THIS PART 16 IF THE INSURER, THE FRATERNAL BENEFIT SOCIETY, OR THE DEVELOPER IS SUBJECT TO THE REQUIREMENTS OF SECTION 10-3-1104.9 AND ANY RULES ADOPTED BY THE COMMISSIONER OF INSURANCE PURSUANT TO SECTION 10-3-1104.9.".

Page 28, strike lines 1 through 4.

Page 30, line 2, after "DISCOVERS" insert "AND CURES".
House Journal, May 8
Amendment No. 1, State, Civic, Military, & Veterans Affairs Report, dated May 4, 2024, and placed in member's bill file; Report also printed in House Journal, May 5, 2024.

Amendment No. 2, by Representative Rutinel:

Amend the State, Civic, Military, and Veterans Affairs Committee Report, dated May 4, 2024, page 1, after line 2 insert:

"Page 10 of the reengrossed bill, line 5, strike "AND".

Page 10 of the bill, line 7, after "SYSTEM;" add "AND
(V) ALL OTHER INFORMATION NECESSARY TO ALLOW THE DEPLOYER TO COMPLY WITH THE REQUIREMENTS OF SECTION 6-1-1603;".".

Page 1 of the report, after line 7 insert:

"Page 14 of the bill, line 11, strike "(8)" and substitute "(6)".

Page 15 of the bill, line 6, strike "SYSTEMS;" and substitute "SYSTEMS, IF THE STANDARDS ARE SUBSTANTIALLY EQUIVALENT TO OR MORE STRINGENT THAN THE REQUIREMENTS OF THIS PART 16;".

Page 21 of the bill, lines 17 and 18, strike "(2) AND (3) OF THIS SECTION AND THIS SUBSECTION (6)" and substitute "(2), (3), AND (5) OF THIS SECTION".".

Page 1 of the report, line 11, strike "(8)," and substitute "(8) OR SECTION 6-1-1605 (5),".

Page 2 of the report, after line 7 insert:

"Page 30 of the bill, line 17, strike "SYSTEMS;" and substitute "SYSTEMS, IF THE STANDARDS ARE SUBSTANTIALLY EQUIVALENT TO OR MORE STRINGENT THAN THE REQUIREMENTS OF THIS PART 16;".".

Amendment No. 3, by Representative Titone:

Amend the State, Civic, Military, and Veterans Affairs Committee Report, dated May 4, 2024, page 1, strike lines 16 through 18 and substitute:

"Page 26 of the bill, line 9, after "CLEARED," insert "DEVELOPED,".

Page 26 of the bill, line 12, strike "AUTHORITY;" and substitute "AUTHORITY, OR BY A REGULATED ENTITY SUBJECT TO THE SUPERVISION AND REGULATION OF THE FEDERAL HOUSING FINANCE AGENCY;".

Page 26 of the bill, line 16, strike "TECHNOLOGY;" and substitute "TECHNOLOGY, OR BY A REGULATED ENTITY SUBJECT TO THE SUPERVISION AND REGULATION OF THE FEDERAL HOUSING FINANCE AGENCY, IF THE STANDARDS ARE SUBSTANTIALLY EQUIVALENT OR MORE STRINGENT THAN THE REQUIREMENTS OF THIS PART 16;".".

Amendment No. 4, by Representative Rutinel:

Amend reengrossed bill, page 9, line 21, strike "INTENDED USES" and substitute "REASONABLY FORESEEABLE USES AND KNOWN HARMFUL OR INAPPROPRIATE USES".

Page 10, line 22, before "DEPLOYMENT" insert "REASONABLY FORESEEABLE".

Page 10, line 25, strike "USED OR" and substitute "USED, NOT BE USED, AND".

As amended, ordered revised and placed on the Calendar for Third Reading and Final Passage.
House Journal, May 8
Amend revised bill, page 24, line 10, strike "A HIGH-RISK" and substitute "AN".

The amendment was declared passed by the following roll call vote:

YES 51 NO 13 EXCUSED 1 ABSENT
Amabile Y English Y Lindstedt Y Sirota Y
Armagost N Epps Y Luck N Snyder Y
Bacon Y Evans N Lukens Y Soper N
Bird Y Frizell N Lynch N Story Y
Bockenfeld E Froelich Y Mabrey Y Taggart Y
Boesenecker Y Garcia Y Marshall Y Titone Y
Bottoms N Hamrick Y Martinez Y Valdez Y
Bradfield Y Hartsook N Marvin Y Velasco Y
Bradley N Hernandez Y Mauro Y Vigil Y
Brown Y Herod Y McCormick Y Weinberg Y
Catlin Y Holtorf Y McLachlan Y Weissman Y
Clifford Y Jodeh Y Ortiz Y Willford Y
Daugherty Y Joseph Y Parenti Y Wilson N
DeGraaf N Kipp Y Pugliese Y Winter T. N
deGruy Kennedy Y Lieder Y Ricks Y Woodrow N
Duran Y Lindsay Y Rutinel Y Young Y
Speaker Y

The question being, "Shall the bill, as amended, pass?".
A roll call vote was taken. As shown by the following recorded vote, a majority of those elected to the House voted in the affirmative, and the bill, as amended, was declared passed.

YES 41 NO 22 EXCUSED 2 ABSENT
Amabile Y English Y Lindstedt N Sirota Y
Armagost N Epps N Luck N Snyder Y
Bacon Y Evans N Lukens Y Soper Y
Bird N Frizell N Lynch N Story Y
Bockenfeld E Froelich Y Mabrey Y Taggart N
Boesenecker Y Garcia Y Marshall N Titone Y
Bottoms N Hamrick Y Martinez Y Valdez Y
Bradfield N Hartsook N Marvin Y Velasco Y
Bradley N Hernandez Y Mauro Y Vigil Y
Brown Y Herod Y McCormick Y Weinberg N
Catlin N Holtorf N McLachlan Y Weissman Y
Clifford Y Jodeh Y Ortiz Y Willford Y
Daugherty N Joseph Y Parenti Y Wilson N
DeGraaf N Kipp Y Pugliese Y Winter T. N
deGruy Kennedy Y Lieder Y Ricks Y Woodrow N
Duran Y Lindsay Y Rutinel Y Young Y
Speaker E

Co-sponsor(s) added: Representative(s) Duran