Computer Bias Homework and Popcorn Hacks
Popcorn Hack #1
Example: Amazon’s AI Hiring Tool (Software)
Who is affected?
Women applying for technical positions.
Demonstration of Bias:
Amazon developed an AI-powered hiring tool to streamline the resume screening process. However, it was discovered that the system was biased against women. The AI model downgraded resumes containing the word “women” (e.g., “women’s chess club captain”) and favored those using masculine-coded language.
Potential Cause of Bias:
The bias likely arose from the training data. The AI was trained on resumes submitted to Amazon over a 10-year period, which predominantly came from male applicants. As a result, the model learned to favor male-associated language and experiences, reinforcing existing gender disparities rather than promoting equal opportunity.
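To see how skewed training data can produce this effect, here is a toy sketch (not Amazon's actual system; the resumes and scoring rule are invented for illustration). A scorer that rewards words seen in past hires will penalize a resume mentioning "women's" simply because that word is absent from a male-dominated history:

```python
# Hypothetical sketch: a toy resume scorer "trained" on skewed historical
# data. All resumes and the scoring rule are invented for illustration.
from collections import Counter

# Historical hired resumes, predominantly from male applicants.
hired_resumes = [
    "captain of men's soccer team, software engineer",
    "men's debate club, backend developer",
    "software engineer, hackathon winner",
]

# Count how often each word appeared among past hires.
word_counts = Counter(
    w for r in hired_resumes for w in r.replace(",", "").split()
)

def score(resume: str) -> int:
    """Score a resume by how familiar its words are from past hires."""
    return sum(word_counts[w] for w in resume.replace(",", "").split())

# Identical resumes except for one word: "women's" never appeared in the
# skewed training set, so that resume scores lower.
print(score("captain of men's soccer team, software engineer"))
print(score("captain of women's soccer team, software engineer"))
```

No word is explicitly penalized here; the gap emerges purely from what the historical data happened to contain, which mirrors how Amazon's model picked up the pattern.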
Popcorn Hack #2
During a group school project, I had to rely on an online collaboration tool that frequently dropped my connection or didn’t sync my contributions in real time. This made it incredibly challenging to work effectively with my classmates, leaving me feeling frustrated and undervalued. Enhancing the tool’s reliability and incorporating better real-time synchronization features would help ensure that every team member, regardless of their tech setup, can contribute equally and confidently.
Popcorn Hack #3
Age Bias:
The app might recommend the same intensity of workouts to all users, assuming a “one-size-fits-all” approach. This could disadvantage older adults or younger users with different physical capacities.
Example: The app might suggest high-impact cardio sessions that are too strenuous for seniors or not challenging enough for younger, athletic users.
Physical Ability Bias:
Users with disabilities or limited mobility may be excluded from recommendations.
Example: The app could prioritize step count or running distance as primary fitness metrics, making it unfair for wheelchair users or those with mobility impairments.
Health Condition Bias:
The app may not account for chronic conditions (e.g., heart disease, diabetes) and could suggest activities that are unsafe for users with such conditions.
Example: A user with a heart condition might receive high-intensity workout recommendations that could put them at risk.
✅ Features to Ensure Fairness and Inclusivity
Customizable Profiles with Health Information:
Allow users to specify their age, physical abilities, and any relevant health conditions during onboarding.
Use this information to tailor fitness recommendations and performance evaluations accordingly.
Adaptive Goal Setting:
Instead of using standardized goals (e.g., 10,000 steps/day), implement adaptive goals based on individual user data.
For example, wheelchair users could have goals based on wheel rotations or movement duration rather than steps.
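A minimal sketch of that idea, assuming a simple profile dictionary (the field names and the 10%-above-baseline formula are my own assumptions, not a real app's API):

```python
# Hypothetical adaptive goal setting: targets come from the user's own
# baseline, and wheelchair users are measured in active minutes, not steps.

def daily_goal(profile: dict) -> dict:
    """Pick a goal metric and target based on the user's own baseline."""
    if profile.get("uses_wheelchair"):
        baseline = profile.get("avg_active_minutes", 30)
        return {"metric": "active_minutes", "target": round(baseline * 1.1)}
    baseline = profile.get("avg_daily_steps", 5000)
    # Nudge the target ~10% above the user's recent average,
    # instead of a flat 10,000 steps for everyone.
    return {"metric": "steps", "target": round(baseline * 1.1)}

print(daily_goal({"avg_daily_steps": 4000}))
print(daily_goal({"uses_wheelchair": True, "avg_active_minutes": 40}))
```

The point is that the goal scales with the individual: a user averaging 4,000 steps gets a reachable 4,400, rather than a discouraging universal 10,000.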
Diverse Exercise Recommendations:
Include accessible workouts, such as seated exercises, low-impact activities, and modified strength routines.
Offer filters for users to select workouts based on their abilities or preferences.
Health-Conscious Alerts:
Include warnings or alerts when suggested workouts exceed safe levels for users with chronic conditions.
Example: “Based on your health profile, this activity may be too intense. Consider a lower-intensity option.”
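One way such an alert could work, sketched below. The condition names, intensity scale, and thresholds are illustrative assumptions only, not medical guidance:

```python
# Hypothetical health-conscious alert check. Intensity runs 1 (gentle)
# to 5 (maximal); the per-condition limits are invented for illustration.

MAX_INTENSITY = {
    "heart_condition": 2,
    "diabetes": 3,
}

def workout_alert(conditions, workout_intensity):
    """Return a warning string if the workout exceeds the user's safe limit."""
    limit = min(
        (MAX_INTENSITY[c] for c in conditions if c in MAX_INTENSITY),
        default=5,  # no known condition: full range allowed
    )
    if workout_intensity > limit:
        return ("Based on your health profile, this activity may be too "
                "intense. Consider a lower-intensity option.")
    return None

print(workout_alert(["heart_condition"], 4))  # triggers the warning
print(workout_alert([], 4))                   # no conditions, no warning
```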
Inclusive Performance Metrics:
Go beyond step count and calories burned. Include metrics like flexibility improvement, balance, or movement consistency, which are more inclusive of diverse fitness abilities.
Accessibility Features:
Voice commands and screen reader support for visually impaired users.
Haptic feedback for users with hearing impairments.
Homework Hack #1
Platform: YouTube
Potential Bias:
YouTube’s recommendation algorithm tends to create echo chambers by promoting content similar to what users have previously watched. This can reinforce pre-existing interests or viewpoints, making it harder to discover diverse perspectives. Additionally, the algorithm may favor content with high engagement (likes, comments, and watch time), which can sometimes amplify sensational or controversial videos.
YouTube’s recommendations may also show bias in language and region preferences. For example, users in non-English-speaking countries often see a disproportionate amount of English content, even if they prefer local language content. Furthermore, creators from smaller regions or niche communities may struggle with visibility due to the platform’s prioritization of popular, widely-viewed content.
Cause of Bias:
- Algorithm Design: The recommendation system relies heavily on user engagement metrics, which can create feedback loops that prioritize already popular content.
- Data Collection: YouTube’s data collection focuses on viewing history, location, and search habits, which can lead to personalization at the expense of diversity.
- Lack of Diverse Testing: If the algorithm is primarily trained on data from larger markets (e.g., the U.S. or English-speaking countries), it may be less effective in promoting content from smaller or underrepresented regions.
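The feedback-loop cause can be simulated in a few lines. This is a hypothetical "rich get richer" model, not YouTube's real algorithm: videos are recommended in proportion to their current view counts, and each recommendation adds a view, so early popularity compounds:

```python
# Hypothetical engagement feedback loop: recommending in proportion to
# views makes views beget more views. All numbers are invented.
import random

random.seed(0)
views = {"video_a": 10, "video_b": 9, "video_c": 8}  # near-equal start

for _ in range(1000):
    titles = list(views)
    # Pick a video to recommend, weighted by its current view count...
    pick = random.choices(titles, weights=[views[t] for t in titles])[0]
    # ...and the recommended video gets watched, boosting its future odds.
    views[pick] += 1

print(views)  # the split typically ends far from even
```

Even starting from a near-equal 10/9/8, the final distribution is lopsided, which is the echo-chamber dynamic in miniature.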
Proposed Solution:
YouTube could introduce a “Discovery Mode” toggle that promotes content outside of the user’s typical preferences. This mode could prioritize diverse creators, underrepresented languages, and a wider range of topics, helping to reduce content echo chambers.
Additionally, the platform could refine its diversity testing practices by ensuring that the recommendation algorithm is tested across a variety of languages, cultures, and content styles. This would help identify and reduce biases that favor dominant regions or popular content creators.
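The Discovery Mode idea could be sketched as a simple blend of feeds. The function name, parameters, and 30% mix ratio below are my own assumptions, meant only to show the shape of the solution:

```python
# Hypothetical "Discovery Mode" blend: when the toggle is on, a fraction
# of the feed comes from outside the user's usual preferences.
import random

def build_feed(personalized, diverse_pool, discovery_on, n=10, mix=0.3):
    """Return n items; with discovery on, ~mix of them come from diverse_pool."""
    if not discovery_on:
        return personalized[:n]
    k = int(n * mix)  # how many out-of-profile picks to include
    picks = random.sample(diverse_pool, k) + personalized[: n - k]
    random.shuffle(picks)  # avoid ghettoizing diverse picks at the top/bottom
    return picks

usual = [f"usual_{i}" for i in range(10)]
fresh = [f"diverse_{i}" for i in range(6)]
print(build_feed(usual, fresh, discovery_on=True))
```

With the toggle off, users keep their familiar feed; with it on, roughly three of every ten slots surface creators and topics the engagement loop would otherwise bury.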