The Algorithm Thinks You're Suspicious

How Demographic Profiling Became Algorithmic Targeting

March 15, 2024 • Bridgeview Tales Investigation Series • Part 3 of 3 • By Privacy Team
Photo: Cell tower (via Unsplash)

According to leaked documents, visiting your grandmother in Bridgeview while having a name like 'Ahmed' generates approximately 47 different government database entries. Having hummus in your grocery cart adds three more, and don't even think about buying a prayer rug during a sale.

Last month, Dr. Layla Hassan, a pediatrician at Advocate Christ Medical Center, discovered she had been flagged as a "Person of Interest" by something called the Comprehensive Analysis and Response Engine (CARE)—a $47 million artificial intelligence system that analyzes surveillance data to predict criminal behavior. Her suspicious activities included: driving to work via Harlem Avenue (route frequently used by mosque attendees), purchasing Middle Eastern groceries (cultural dietary patterns), and calling her mother in Jordan twice weekly (international communications with suspicious frequency).

Dr. Hassan has lived in Bridgeview for 12 years, graduated from Northwestern Medical School, treats sick children for a living, and has never received so much as a parking ticket. But according to federal algorithms trained on counterinsurgency data from Iraq and Afghanistan, her lifestyle patterns indicate potential involvement in extremist activities requiring enhanced surveillance.

Welcome to algorithmic profiling in 21st century America, where artificial intelligence has perfected the art of systematic discrimination and your constitutional rights get processed through machine learning models designed for warfare.

The CARE System: Artificial Intelligence Meets Artificial Bias

The Comprehensive Analysis and Response Engine represents the federal government's attempt to automate discrimination through artificial intelligence. According to technical specifications obtained through Freedom of Information Act requests, CARE processes surveillance data through machine learning algorithms specifically designed to identify "pre-criminal behavior patterns" in immigrant and Muslim communities.

The system analyzes data from:

  • Cell-site simulators monitoring phone communications
  • License plate readers tracking vehicle movements
  • Facial recognition systems identifying individuals in public spaces
  • Financial surveillance monitoring banking and credit card transactions
  • Social media monitoring analyzing online activities and associations
  • Immigration databases cross-referencing legal status and family connections

CARE then applies algorithmic analysis to identify what the system calls "Elevated Risk Individuals" based on behavioral patterns, demographic characteristics, and association networks. The artificial intelligence has learned to be suspicious of people who look, pray, eat, and communicate like Middle Eastern Americans.
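
The documents released so far describe this architecture in prose rather than code, but it maps onto an ordinary data-fusion pipeline: independent feeds keyed to one person, merged into a single profile before scoring. The sketch below is a minimal illustration of that structure; the record types, field names, and feed labels are assumptions chosen for readability, not CARE's actual schema.

```python
from dataclasses import dataclass, field
from typing import Iterable

# Hypothetical illustration of the fusion step described in the FOIA'd
# specifications: events from separate surveillance feeds are merged into
# one profile per subject before any risk scoring happens. Names and
# fields here are assumptions, not CARE's real data model.

@dataclass
class SurveillanceEvent:
    subject_id: str
    source: str        # e.g. "cell_site_simulator", "lpr", "facial_recognition"
    timestamp: str     # ISO 8601
    detail: dict = field(default_factory=dict)

@dataclass
class SubjectProfile:
    subject_id: str
    events: list[SurveillanceEvent] = field(default_factory=list)

def fuse(feeds: Iterable[Iterable[SurveillanceEvent]]) -> dict[str, SubjectProfile]:
    """Merge every feed's events into a single profile per subject."""
    profiles: dict[str, SubjectProfile] = {}
    for feed in feeds:
        for event in feed:
            profile = profiles.setdefault(
                event.subject_id, SubjectProfile(event.subject_id)
            )
            profile.events.append(event)
    return profiles
```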

The Mathematics of Automated Bigotry

CARE's risk assessment algorithm assigns numerical scores to behaviors, associations, and demographic characteristics, creating what amounts to a social credit system for Middle Eastern Americans. According to algorithm specifications, the system evaluates individuals based on:

Demographic Factors (Base Score Modifiers):

  • Middle Eastern surname: +15 points
  • Birth country in MENA region: +12 points
  • Religious affiliation (Islam): +18 points
  • Arabic/Farsi language usage: +10 points
  • Multiple passports/dual citizenship: +14 points

Behavioral Pattern Analysis:

  • Mosque attendance: +5 to +25 points (scaled by frequency)
  • International communications: +3 to +15 points (based on destination country)
  • Cash transaction patterns: +7 to +20 points (frequency and amount)
  • Cultural grocery shopping: +4 to +12 points (Middle Eastern markets)
  • Community event participation: +6 to +18 points (based on event type)

Association Network Mapping:

  • Contact with other flagged individuals: +10 to +30 points
  • Family connections to persons of interest: +8 to +22 points
  • Employment at Muslim organizations: +12 to +35 points
  • Membership in cultural/religious groups: +5 to +15 points

The algorithm generates risk scores from 0-200, with anyone scoring above 75 automatically designated for enhanced surveillance. According to internal CBP documents, the average Middle Eastern American in Bridgeview scores 89 points before any behavioral analysis—meaning they're flagged as suspicious simply for existing while ethnically profiled.
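
To make the arithmetic concrete, here is a minimal sketch of a scorer built from the point values quoted above. It is a reconstruction for illustration, not Palantir's code: the specifications give ranges for the behavioral and association factors without documenting how CARE scales them, so the sketch takes those per-factor points as inputs and only hard-codes the demographic modifiers.

```python
# Minimal reconstruction of the scoring scheme described in the leaked
# specifications. Demographic point values are the ones quoted above;
# behavioral and association points are passed in by the caller because
# the documents only give ranges for them.

DEMOGRAPHIC_POINTS = {
    "middle_eastern_surname": 15,
    "mena_birth_country": 12,
    "muslim_religious_affiliation": 18,
    "arabic_or_farsi_usage": 10,
    "dual_citizenship": 14,
}

ENHANCED_SURVEILLANCE_THRESHOLD = 75  # scores run 0-200 per the documents

def risk_score(demographics: set[str], other_points: dict[str, int]) -> int:
    """Sum demographic base modifiers plus behavioral/association points."""
    base = sum(DEMOGRAPHIC_POINTS[d] for d in demographics)
    return min(200, base + sum(other_points.values()))

def flag_for_enhanced_surveillance(score: int) -> bool:
    return score > ENHANCED_SURVEILLANCE_THRESHOLD
```

Even before any behavior enters the model, a resident with a Middle Eastern surname, a MENA birth country, and a Muslim religious affiliation starts at 45 of the 75 threshold points.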

Following the Algorithm's Logic Through Dr. Hassan's Life

Let me walk you through how CARE analyzes Dr. Hassan's daily life, demonstrating how artificial intelligence transforms normal Middle Eastern American activities into algorithmic evidence of suspicious behavior.

Monday Morning Commute:

7:23 AM: License plate reader captures Dr. Hassan driving south on Harlem Avenue toward Christ Medical Center, passing through surveillance zones coordinated between Bridgeview monitoring stations and the broader Gateway Shield network, which includes nodes at 26th Street and Kostner Avenue, 18th Street in Pilsen, and Lawrence Avenue in Albany Park.

CARE Analysis: Route frequently used by mosque attendees (+8 points). Time corresponds to morning prayer schedule (+5 points). Vehicle registered to individual with elevated risk profile (+12 points). Cross-referenced with Gateway Shield network traffic patterns (+6 points).

Algorithm Interpretation: Subject maintains suspicious travel patterns consistent with religious extremist daily routine.

Tuesday Grocery Shopping:

6:45 PM: Facial recognition system identifies Dr. Hassan entering Middle East Bakery on 87th Street.

CARE Analysis: Purchase patterns include cultural foods (+6 points), international products (+4 points), cash payment method (+7 points). Location designated as community gathering point (+10 points).

Algorithm Interpretation: Subject engages in cultural affinity behaviors indicating strong ethnic identification and potential community coordination.

Friday Mosque Attendance:

12:25 PM: Multiple surveillance systems capture Dr. Hassan arriving at Islamic Foundation for Jummah prayers.

CARE Analysis: Religious facility attendance (+15 points), timing indicates planning and coordination (+6 points), association with other flagged individuals in prayer hall (+18 points), post-prayer social interaction (+7 points).

Algorithm Interpretation: Subject participates in organized religious activities with elevated security concerns and maintains association network requiring monitoring.

By week's end, Dr. Hassan has accumulated 147 algorithmic suspicion points for the apparent crimes of being a practicing Muslim, maintaining cultural identity, staying connected with family abroad, and participating in community life. The artificial intelligence has analyzed her normal American life and concluded she poses security threats requiring federal monitoring.
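
Tallying just the three days itemized above shows how fast the points compound; the subtotal below covers only those entries, with the balance of the 147-point weekly figure coming from days and feeds not reproduced in this article.

```python
# Point totals for the three days itemized above, taken from the CARE
# analyses quoted in this article. Other days of the week are not shown,
# which is why this subtotal sits below the 147-point weekly figure.

week = {
    "monday_commute":  [8, 5, 12, 6],   # route, prayer-time window, vehicle, network cross-reference
    "tuesday_grocery": [6, 4, 7, 10],   # cultural foods, imports, cash payment, gathering point
    "friday_jummah":   [15, 6, 18, 7],  # attendance, timing, associations, post-prayer socializing
}

subtotal = sum(sum(points) for points in week.values())
print(subtotal)  # 104 of the week's 147 points from just three outings
```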

The Training Data Problem: Garbage In, Garbage Out

CARE's algorithmic bias reflects the surveillance data used to train the system. According to technical documentation, the artificial intelligence was trained on:

  • Counterinsurgency data from Iraq and Afghanistan operations (2003-2021)
  • FBI domestic surveillance files from post-9/11 investigations (2001-2019)
  • ICE enforcement data from immigration operations (2003-2020)
  • Local police intelligence from Joint Terrorism Task Force activities (2001-2018)
  • NSA communications intercepts from foreign intelligence collection (2001-2016)

The training data represents two decades of surveillance conducted primarily against Middle Eastern populations during wartime occupations and domestic security crackdowns following 9/11. The algorithm learned to identify threats by analyzing data collected from populations already presumed dangerous by military and law enforcement agencies.
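
The mechanism is unremarkable once stated plainly: a model trained on labels produced by targeted enforcement learns the targeting itself. The toy example below uses made-up numbers to show how a naive classifier reproduces whatever base rate is baked into its training labels.

```python
from collections import Counter

# Toy illustration with invented numbers: if historical "threat" labels
# come from surveillance aimed mostly at one community, a model that
# estimates P(threat | group) from those labels inherits the skew.

training_labels = (
    [("group_a", 1)] * 90 + [("group_a", 0)] * 910    # heavily surveilled
    + [("group_b", 1)] * 5 + [("group_b", 0)] * 995   # barely surveilled
)

flagged = Counter(group for group, label in training_labels if label == 1)
totals = Counter(group for group, _ in training_labels)

for group in sorted(totals):
    print(f"{group}: learned threat rate = {flagged[group] / totals[group]:.1%}")
# group_a: learned threat rate = 9.0%
# group_b: learned threat rate = 0.5%
```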

The Feedback Loop of Algorithmic Oppression

CARE creates self-reinforcing cycles of surveillance and suspicion that trap Middle Eastern Americans in algorithmic targeting systems. Once flagged as suspicious, individuals generate more surveillance data, which feeds back into the algorithm, increasing their risk scores and justifying additional monitoring.

The Surveillance Amplification Cycle:

  1. Initial Flagging: Individual scores above algorithmic threshold due to demographic characteristics
  2. Enhanced Monitoring: Additional surveillance generates more data about flagged person's activities
  3. Data Accumulation: Normal behaviors get documented and analyzed as potential threat indicators
  4. Score Inflation: Increased data volume leads to higher risk scores through frequency weighting
  5. Expanded Surveillance: Higher scores trigger more intensive monitoring and broader network analysis
  6. Community Contamination: Family and associates get flagged through association analysis
  7. Perpetual Monitoring: Once in the system, individuals remain under surveillance indefinitely
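
The documents describe this cycle qualitatively, but its shape is a textbook positive feedback loop: more monitoring produces more records, and frequency weighting turns record volume into a higher score, which triggers still more monitoring. The simulation below is illustrative only; the per-record weight, baseline record count, and intensity multiplier are assumed values, not parameters from the CARE documentation.

```python
# Illustrative simulation of the amplification cycle listed above.
# THRESHOLD comes from the documents; the other constants are assumptions
# chosen only to show the shape of the feedback loop.

THRESHOLD = 75           # enhanced-surveillance cutoff
POINTS_PER_RECORD = 0.5  # assumed frequency weighting
BASELINE_RECORDS = 20    # assumed surveillance records per week, pre-flagging

def simulate(base_score: float, weeks: int = 10) -> list[float]:
    scores = [base_score]
    for _ in range(weeks):
        score = scores[-1]
        # Steps 2 and 5: a higher score triggers more intensive monitoring.
        intensity = 3.0 if score > THRESHOLD else 1.0
        new_records = BASELINE_RECORDS * intensity
        # Steps 3 and 4: the new records feed back into the score.
        scores.append(min(200, score + POINTS_PER_RECORD * new_records))
    return scores

print(simulate(base_score=69))  # the five demographic modifiers alone sum to 69
```

Under these assumed parameters, a subject who starts from the demographic modifiers alone crosses the 75-point threshold within a week of ordinary life and saturates the 200-point scale within two months; the exact trajectory depends on the assumptions, but the ratchet only turns one way.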

The Human Impact of Machine Prejudice

Living under algorithmic surveillance creates psychological trauma that affects every aspect of daily life. The Bridgeview Community Health Center documented the mental health effects of CARE system targeting in a 2022 study:

  • Behavioral Modification: 89% of algorithmically flagged individuals report changing their daily routines to avoid triggering additional surveillance.
  • Social Isolation: 67% of flagged individuals report reducing community participation due to concern that associations will increase family members' risk scores.
  • Professional Impact: 34% of flagged individuals report workplace discrimination after employers receive security clearance inquiries related to algorithmic flagging.
  • Family Separation: 23% of families report that some members avoid visiting Bridgeview relatives due to concern that association networks will trigger algorithmic targeting.

"The algorithm doesn't just watch people—it changes how they live," explains Dr. Ahmad Khalil, a psychiatrist who works with algorithmically targeted families. "People modify their behavior to seem less suspicious to artificial intelligence. They reduce religious practice, limit cultural activities, avoid community involvement. It's digital assimilation through surveillance pressure."

The Constitutional Crisis of Coded Bias

The deployment of biased algorithms in surveillance operations creates constitutional violations that existing legal frameworks struggle to address. CARE's systematic targeting of Middle Eastern Americans violates multiple constitutional principles:

  • Equal Protection Violations: The algorithm's demographic weighting system treats citizens differently based on ethnicity, religion, and national origin.
  • First Amendment Infringement: The system penalizes religious practice, cultural association, and international communication.
  • Fourth Amendment Violations: The algorithmic suspicion generated by demographic characteristics and normal activities doesn't constitute probable cause for surveillance.
  • Due Process Denial: Individuals flagged by CARE face enhanced surveillance and restricted opportunities without notification, explanation, or appeal processes.

Illinois passed the Algorithmic Accountability Act in 2021, requiring government agencies to audit automated decision-making systems for bias and discrimination. But the law doesn't apply to federal agencies operating under national security authorities. It's like having anti-discrimination laws that don't apply to the agencies doing most of the discriminating.

The Business of Automated Bigotry

The companies that built CARE represent a who's who of defense contractors expanding from foreign surveillance into domestic discrimination markets. Palantir Technologies, the data analysis company co-founded by Peter Thiel, developed CARE's core algorithmic architecture under a $47 million contract with CBP.

According to contract documents, Palantir specifically marketed CARE as a "cultural behavior analysis platform" capable of identifying "ethnic community threat patterns" and "religious extremism indicators." The company's technical specifications promised algorithms that could "decode cultural signaling" and "identify assimilation resistance patterns" among immigrant populations.

IBM provided the machine learning infrastructure, charging $23 million for artificial intelligence systems optimized for "demographic risk assessment" and "community-based threat modeling." Booz Allen Hamilton provided $18 million in "algorithmic bias mitigation consulting"—essentially helping federal agencies discriminate more efficiently while maintaining plausible deniability about intentional targeting.

The Future Is Artificially Biased

Recent procurement documents reference "next-generation behavioral prediction algorithms" and "enhanced cultural pattern recognition systems." Translation: federal agencies are developing artificial intelligence that will predict criminal behavior based on demographic characteristics with even greater precision and legal sophistication.

The proposed systems promise capabilities that sound like dystopian science fiction: real-time emotion recognition that identifies suspicious feelings, predictive algorithms that flag potential criminals before they commit crimes, cultural analysis systems that evaluate assimilation levels and American identity authenticity.

Behavioral prediction based on shopping patterns, prayer frequency, and family communication habits. Emotional analysis that flags individuals for feeling angry about discrimination or surveillance. Cultural authenticity algorithms that evaluate whether Middle Eastern Americans are properly assimilated or maintaining dangerous ethnic loyalties.

The View from Inside the Algorithm

As evening falls over Bridgeview, CARE's artificial intelligence processes the day's surveillance data through algorithmic models trained on decades of discriminatory policing and military occupation. Every phone call, every mosque visit, every cultural grocery purchase gets analyzed through machine learning systems that have learned to treat Middle Eastern identity as inherently suspicious.

Dr. Hassan completes her shift treating sick children at Christ Medical Center, unaware that artificial intelligence has analyzed her workday communications, flagged her route home for suspicious timing, and cross-referenced her evening plans against algorithmic models of normal American behavior.

The federal government has spent $47 million to build artificial intelligence that systematically discriminates against Middle Eastern Americans while maintaining plausible deniability through mathematical complexity. They've created algorithms that perpetuate bias with computational efficiency, automate prejudice with statistical precision, and justify surveillance through artificial objectivity.

CARE has normalized algorithmic discrimination by disguising bias as data science. The system exists not because Middle Eastern Americans pose threats requiring artificial intelligence analysis, but because discrimination becomes more legally defensible when performed by computers rather than humans.

We've built artificial intelligence that has learned to be systematically biased against American citizens based on their ethnicity, religion, and cultural practices. The algorithm thinks you're suspicious not because of what you've done, but because of who you are, where you pray, and how you choose to live your life in America.

Welcome to the algorithmic surveillance state, where your constitutional rights get processed through machine learning models trained on bias, discrimination, and fear. The computer has learned to be prejudiced more efficiently than humans ever managed, and it's calling that progress.

In algorithmic America, artificial intelligence has perfected the art of systematic discrimination. And it thinks you're very, very suspicious.

Algorithmic Justice Resources
  • Algorithmic Justice League: Fighting bias in artificial intelligence systems
  • AI Now Institute: Research and advocacy on algorithmic accountability
  • Electronic Frontier Foundation: Algorithmic transparency and civil liberties
  • Partnership on AI: Industry accountability and bias mitigation
Documentation
  • FOIA sources:
      • DHS FOIA 2021-ICFO-38472
      • Palantir contract documents
  • MIT algorithmic bias studies
  • University of Illinois Chicago Computer Science Department analysis