
Week 43

Mon 20th Oct - Sun 26th Oct 2025

Monday 20th October 2025

Knowledge Base Workgroup

  • Type of meeting: Monthly

  • Present: Tevo [facilitator], Alfred Itodele [documenter], Ayomishuga, Malik, Effiom, Rems

  • Purpose: Aggregating Ambassador Program assets and relevant information under GitBook

  • Miro board: Link

  • Other media: Link

  • Working Docs:

Agenda item 1 - Workgroup Management, Proposals/Budget & Progress Reports - [resolved]

Discussion Points:

  • Discussed creating report templates, to streamline the report creation process

Decision Items:

  • A single person will create report templates for all related workgroups (Treasury, Process). An LLM will be used to generate a first draft based on the template and GitHub task descriptions; the draft will be enriched with information from meeting summaries, and the workgroup will collectively review and finalize the report in the next session (a rough sketch of this pipeline follows below). The workgroup will update the Budget Sheet collaboratively during meetings, to finalise the proposal. The report template will be reviewed during the next meeting.

    • [effect] affectsOnlyThisWorkgroup
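As a loose illustration of the drafting pipeline described in the decision above (the template fields and the llm_complete call are assumptions for this sketch, not the workgroup's actual tooling):

```python
# Hypothetical sketch of the report-drafting pipeline: fill a shared
# template from GitHub task descriptions, then ask an LLM for a first
# draft that the workgroup reviews and enriches by hand.
REPORT_TEMPLATE = """\
Quarterly report for {workgroup}

Completed tasks:
{tasks}

Highlights from meeting summaries:
{highlights}
"""

def build_draft_prompt(workgroup: str, task_descriptions: list[str],
                       summary_snippets: list[str]) -> str:
    filled = REPORT_TEMPLATE.format(
        workgroup=workgroup,
        tasks="\n".join(f"- {t}" for t in task_descriptions),
        highlights="\n".join(f"- {s}" for s in summary_snippets),
    )
    return ("Draft a progress report from the material below, "
            "keeping the section order of the template:\n\n" + filled)

# llm_complete() stands in for whatever model API is actually used:
# draft = llm_complete(build_draft_prompt("Treasury", tasks, snippets))
```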

Action Items:

  • [action] Alfred to draft the meeting summary; Effiom to review, edit, and publish the final meeting summary. Alfred to share the template he has been working on in the Discord channel, to be reviewed async. [assignee] Alfred Itodele, Effiom [due] 27 October 2025 [status] done

Agenda item 2 - GitBook Management & Organization - [resolved]

Discussion Points:

  • Addressed the issue of new items appearing on the organizing sheet without matching data sources; a new feature was implemented to clean names of special characters to improve matching. Discussed how to handle archiving or removing items from the GitBook and how to manage documents with conflicting information. A new batch of assets was identified for organization and inclusion in the GitBook.
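A minimal sketch of what such special-character cleaning might look like (the function name and exact rules are assumptions; the feature as actually implemented may differ):

```python
import re
import unicodedata

def clean_name(name: str) -> str:
    """Normalise an asset name so it can be matched against data sources."""
    # Decompose accented characters and drop the combining marks.
    name = unicodedata.normalize("NFKD", name)
    name = "".join(c for c in name if not unicodedata.combining(c))
    # Remove everything except letters, digits, and spaces.
    name = re.sub(r"[^A-Za-z0-9 ]+", " ", name)
    # Collapse whitespace and compare case-insensitively.
    return " ".join(name.split()).lower()

assert clean_name("Q4 – Progress  Report!") == clean_name("q4 progress report")
```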

Decision Items:

  • The group will proceed with contacting owner workgroups to resolve content conflicts. The distribution of new assets was confirmed.

    • [effect] affectsOnlyThisWorkgroup

Action Items:

Keywords/tags:

  • topics covered: Knowledge Base, GitBook, Asset Management, Budget Sheet, Q4 Progress Report, Proposals, Overview Document, Organising Sheet, Data Sources, Conflicts, Archiving

Thursday 23rd October 2025

AI Ethics WG

  • Type of meeting: One-off event

  • Present: LadyTempestt [facilitator], Stephen [QADAO] [documenter], lola lawson, Sucre n Spice, CallyFromAuron, EstherG, Kare Dahl, Mariia Lagutina, Prosper Etim, AshleyDawn, Alfred Itodele, Ayomi Shuga, PeterE, Sharmila, CollyPride, UKnowZork, others

  • Purpose: Session 1 of the BGI-25 Virtual Unconference: "Interviewing the public on AI ethics"

  • Meeting video: Link

  • Working Docs:

Narrative:

PRESENTATIONS

The sNET Ambassador Program’s AI Ethics workgroup used interviews (recorded, anonymised, and coded) to explore public opinion on AI ethics.

Our interview process:

  • Using interviewers from the same demographic group as the interviewees elicited deeper insights;

  • Language access: Interviewing in languages other than English is a long-term aim, so as not to limit whose voices are heard; sometimes interviews were stilted when neither the interviewer nor the interviewee was a first-language English speaker. But a multi-lingual interview corpus will demand additional overhead for translation and for checking the accuracy of translations.

  • Beginner interviewers need some training, but “learning by doing” works well too

  • Transcription is important (to allow for varying audio quality, and for different accents)

  • Interviewers transcribing their own interviews was initially the plan, but in practice it didn’t necessarily produce better transcripts

  • AI transcription tools save time, and help with poor audio quality (due to e.g. bad Internet connections), but often misinterpret words, accents and background noise; so they were only used as a first pass, and human editing was always needed.

Early analysis and coding have found three key themes in the interviews:

  • Human oversight is crucial for safety: AI autonomy without human control was seen as risky and disruptive.

  • Inclusivity is itself a form of safety - people of different cultures and languages should be part of building and controlling AI, not just using it. People trust what they help to build.

  • AI is a tool, not a replacement for people - but some do use it as a companion.

DISCUSSION

  1. Governance, trust, and power

  • Government vs. corporate trust: Some participants in the session, especially those in Nigeria, expressed low trust in the idea of governments controlling the development of AI, due to issues such as censorship, manipulation of narratives, and low digital awareness amongst politicians. Some participants were more open to corporate involvement, but still worried about privacy and data misuse; they said it would be better for multiple individuals to hold the power in AI design and control, and preferred community-driven, participatory governance to putting power in the hands of one entity, such as a government or a private company.

  • Policy suggestions: Regulations should fit local contexts, include community voices, and avoid repeating “top-down” approaches. Collaborative and locally-led governance of AI was encouraged.

  2. Bias, datasets, and justice

  • Technical and social bias (e.g. in image-generation tools) points to the need for both better dataset design and clearer legal protections.

  • Bias examples: Real examples were discussed, such as image-generation tools that have portrayed black people negatively and other races positively, indicating how social bias can be embedded in AI systems as a result of the biases of the system creators.

  • Issues such as recruiters using AI to determine whether someone’s CV is AI-generated - can we trust these kinds of uses of AI?

  • Legal gaps: Existing laws don’t fully address biased data training or misuse. Stronger accountability and rights around data ownership and retraining were recommended.

  • True inclusion: Inclusion isn’t just about adding people from different backgrounds; it’s also about including their values, experiences, and cultural knowledge in AI design.

  3. AI design, bias, and sexism

  • Many AI assistants use female voices and have female names, which can reinforce sex stereotypes of service roles being female-coded. The group suggested chatbots with a wider range of gendered voices, or gender-neutral voices, to undermine these stereotypes.

  4. Data privacy and consent

  • Participants raised concerns about unclear privacy policies in AI tools, lack of clarity about what personal data will be used for (e.g. whether it will be used to train AI further), and the difficulty of deleting personal data once it’s online. Consent should be more transparent and easy to manage.

  • Data sovereignty: Data collected from local communities should remain beneficial to those communities and not be exploited by centralised organisations.

TAKEAWAYS/LEARNING POINTS:

  • Use AI transcription tools to save time, but always follow up with a human review before using the data.

  • Recruit more marginalized voices as interviewers to make research fairer, more accurate and more inclusive.

  • Create community-led policy drafts that protect data ownership and promote accountability.

  • Explore customizable voice options to remove gender and cultural bias.

  • When designing AI systems, human oversight, cultural inclusivity, and human empathy are just as important as technical performance.

  • How we transcribe and document qualitative research affects how findings are understood.

  • AI governance should be community-based, not imposed from outside.

  • Tackling bias requires work at the dataset and design levels, with laws to back it up.

  • Next steps include training a wide range of interviewers, and building community-informed governance for African contexts.

Keywords/tags:

  • topics covered: AI ethics, interviewing, qualitative research, transcription, AI bias, data sovereignty, human-in-the-loop, gender bias, sex bias, sexism, African AI, Nigeria, AI governance, dataset fairness, recruitment, privacy, Anonymity & Transparency, Consent, data privacy, chatbots, AI design

  • emotions: reflective, honest, culturally conscious, insightful, grounded, urgent, Collaborative, lively

AI Sandbox/Think-tank

In this meeting we discussed:

  • Vasu Madaan presented his idea for a new approach to regulating AI agents on the SingularityNET tech stack, by means of a DAO structure.

This would allow a decentralised approach to managing ethical risks, via a common ethical framework to guide the behaviour of AI agents, and a reputation system to evaluate and enhance their trustworthiness and accountability (a toy sketch of such a reputation record follows the component list below).

Key system components:

  • superintelligence.io: ASI coordination

  • singularitynet.io: AI services

  • fetch.ai: Autonomous infrastructure

  • agentverse.ai: Deployment layer

  • hyperon.opencog.org: Cognitive core
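As a toy sketch of the kind of per-agent reputation record such a system might keep (the field names and scoring rule are assumptions, not part of the proposal):

```python
from dataclasses import dataclass, field

@dataclass
class AgentReputation:
    """Hypothetical reputation record for one AI agent in the DAO."""
    agent_id: str                      # e.g. an Agentverse address
    ethics_violations: int = 0         # breaches of the common framework
    completed_tasks: int = 0
    endorsements: list[str] = field(default_factory=list)  # endorsing members

    def score(self) -> float:
        # Toy rule: endorsed, productive agents rank higher; each
        # recorded violation applies a multiplicative penalty.
        base = self.completed_tasks + 2 * len(self.endorsements)
        return base * (0.5 ** self.ethics_violations)

agent = AgentReputation("agent-001", completed_tasks=10, endorsements=["alice"])
print(agent.score())  # 12.0 before any violations
```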

For more details, see:

  • the slide deck: https://docs.google.com/presentation/d/1SUT5hd24HLzbxHqT88pkPybP-Ov_YhAii7qL3mARdeA/edit?usp=sharing

  • the blogpost: https://docs.google.com/document/d/1lufZVGjK4Ltm-QmGd34iMcAxYh-LCj80oSEBJNgaBDc/edit?usp=sharing

Keywords/tags:

  • topics covered: DAOs, AI agents, Decentralization, AI ethics, AI safety, accountability, human-in-the-loop, Human values, Fetch.ai, Agentverse, OpenCog Hyperon

African Guild

Narrative:

PRESENTATIONS

There were short presentations from 4 Ambassador Program workgroups, focusing on the issues raised for decentralised communities when they use AI tooling in their governance systems.

  1. Ethical AI use in community governance (Stephen Whitenstall, SingularityNET Archives) When we use AI tooling in our governance, e.g. to help us interpret our records, its outputs must be auditable. Good recordkeeping is obviously important for governance transparency; and knowledge graphs and other AI processes can help us interpret records, and track changes in our governance processes. But for good ethics, we need community verification of such AI-generated insights - so we should build in processes for a community to audit and verify what AI says about governance. Prompting our AI tools, and engineering the context, are important, but community verification of AI outputs is perhaps even more so. It is the community who determine what records “mean”.

  2. Community-first intelligence and African pathways to ethical and inclusive AI: Duke Peter (sNET Ambassador Program African Guild) AI connects us digitally but can isolate us socially - automated, contactless interactions can erode community. Western AI ethics frameworks tend to emphasise individual rights over collective wellbeing. By contrast, African Ubuntu philosophy (“I am because we are”) redefines the “intelligence” of “artificial intelligence” as shared values and cooperation. African Guild’s community reflection sessions have explored what “trustworthy AI” would mean for different cultures and professions, leading to conference papers, research reports, educational materials, and Medium articles. The goal is to ensure that AI systems connect with the values of diverse local communities. By creating ethical frameworks that fit a range of cultural contexts, AI governance becomes more inclusive, and rooted in the full range of human values.

  3. EthosGuard - an AI ethics & governance advisor agent (LordKizzy, AI Sandbox/ Think-Tank) EthosGuard is a proposed AI agent for ethical governance, designed for the SingularityNET Ambassador Program but applicable to any decentralised community. It supports fair and transparent decision-making, by helping people a) understand their community’s governance structures and b) analyse governance issues according to their community’s agreed values (e.g. decentralization, inclusion, etc). Context engineering is important - EthosGuard will be trained on the community’s governance documents and records, and on general DAO ethics. It also uses participatory feedback loops and transparent reasoning, so that every recommendation cites the exact ethical principle involved. This addresses risks such as bias, automation creep, and ethical opacity.

  4. Governance Framework (Guillermo Lucero Funes, R&D Guild)

The Governance Dashboard https://singularitynet-governance-dashboard.vercel.app/dashboard is a new tool, built on the ASI tech stack, to support the Ambassador Program’s consent-based decision-making. It integrates an AI assistant intended as an augmentation layer, helping community members synthesize data and clarify arguments. Its first use, for the budget decision for Q4 2025, was done without any contextualising, due to short deadlines and a low development budget - but the next iteration will use the community’s archives, decision records, and governance documents to provide context-aware guidance. As with other Ambassador Program tooling, context engineering is vital, and the intention is to engage the whole community in deciding exactly what material to use as context.

DISCUSSION

  1. The different workgroups each have their own take, so the work is not monolithic and we can contrast the different approaches; but all recognise the need for human control over the technology, and the need for some kind of context engineering.

  2. Data-first, context-first, and values-first are complementary approaches that inform each other: the prompt-orientated approach takes community-specific data as context to say what our values actually are, and prompts the AI to align with that; the archival approach (more grounded-theory style?) examines the data for patterns to discover what the significant issues are before designing prompts (e.g. if a particular topic keeps appearing in our meeting summaries, that suggests it is a significant issue in our community, and it could then become a prompt).

  3. Prompts are a dynamic workflow of information, and can change in response to changes in the community. What is really interesting is tracking those bias shifts in a community - how a community evolves and responds to circumstances.

  4. Use of blockchain may be a next step, for tracking, immutability, and auditability: if you create a prompt, e.g. a specific question, the prompt itself becomes a data object and can be audited - it’s not passive, it has its own context, so you have an audit trail for it, which is especially useful if the prompt produces unexpected results. (Compare Patrick Tobler of NMKR’s recent work in Cardano on a platform that provides AI agents that are auditable on the blockchain.) A minimal sketch of this idea appears after this list.

  5. The different paradigms and values of different kinds of communities present an interesting challenge - can we create a community AI that is flexible enough to accommodate different types of community? We want to include dialogue in our governance - it’s not only about what an AI can see from our written contexts, but also about using our dialogues and discussions as a dataset - a way to really include the human element, by using the ways we engage and interact as a community as part of our contextual info. Discourse analysis?

  6. Cultural difference - there are differences WITHIN a culture as well as between different cultures. Dissent within a culture creates richness, and can be undermined by monolithic systems or forced conformity - but how do we prompt AI to take account of internal dissent?

  7. Could we create an AI model that is geographically aware, and interacts with people in, say, Africa differently from how it interacts with people in Europe or Latin America, respecting their cultural contexts? Because we can’t (and don’t want to) bring everyone under a universal umbrella.

  8. Re meta-governance and communities auditing their own biases - bias is inevitable, and is not a bad thing in itself: e.g. a community could be biased in favour of inclusivity, or open source. “Bias” can be another word for “values”. And change is inevitable too: your governance processes and biases may change. These self-audit processes are all quite internal to a community. So how do we build adaptable/generic tools that other communities can also use - even communities that are not Web3-based?

  9. In Cardano some years back, we hoped to be able to “plug in” any culture on top of the layer 1 blockchain governance, but cultural assumptions ended up being made - e.g. the US-style idea of a constitution. Now we have something similar with AI communities - systems being designed not as a neutral base with “add-ons” for particular cultural contexts, but with cultural assumptions at the base level.

  10. Using tools in a way that fits our context is not just geographic - there’s intersectionality with other forms of difference (sex, gender, sexuality, disability, age, rural/urban-ness, etc.). So what we are building is not so much the tool itself as the processes for different kinds of communities to manage contextualisation, and the ways for a community or a person to see themselves reflected in the tool. Anyone can build a tool; building blueprints or toolkits for the community contextualising processes is harder.

  11. Not knowing your history often undermines community - in an AI tooling context, that would mean not having control of what the questions are and what the contextual info is. How do we ensure WE are the ones who get to say “This is what our community means”?
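A minimal sketch of the “prompt as auditable data object” idea from point 4 (all names and fields are assumptions; the on-chain anchoring step is only indicated in a comment):

```python
import hashlib
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

@dataclass
class PromptRecord:
    """A prompt treated as a data object: it carries its own context
    and yields a stable content hash for an audit trail."""
    question: str
    context_sources: list[str]  # e.g. meeting summaries used as context
    author: str
    created: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def content_hash(self) -> str:
        # Hash the canonical JSON form; anchoring this hash on a
        # blockchain would give the immutability discussed above.
        canonical = json.dumps(asdict(self), sort_keys=True)
        return hashlib.sha256(canonical.encode()).hexdigest()

record = PromptRecord(
    question="Which topics recur across this quarter's meeting summaries?",
    context_sources=["week-42-summaries.md", "week-43-summaries.md"],
    author="governance-wg",
)
print(record.content_hash())  # log alongside the model's output
```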

Keywords/tags:

  • topics covered: African Guild, Archives WorkGroup, R&D Guild, AI Sandbox/Think Tank, EthosGuard, AI recordkeeping ethics, recordkeeping, institutional memory, Knowledge Graphs, AI governance, AI ethics, decentralized governance, diversity, Ubuntu, African philosophy, transparency, auditability, verification, ASI 1, context engineering, Human values

  • emotions: reflective, Collaborative, Educative, hopeful

Friday 24th October 2025

Marketing Guild

Discussion Points:

  • Updates and Introductions: LordKizzy opened the Marketing Guild call by welcoming members and outlining the agenda.

  • Update on the scavenger hunt initiative: Advance reported that he is currently experiencing a power outage, while Gorga mentioned that he has completed his graphics task.

Action Items:

  • [action] LordKizzy to edit the report, include reasons for the scavenger hunt initiative, and ensure all details are in place [assignee] LordKizzy [due] 7 November 2025 [status] todo

  • [action] LordKizzy to drop a message in the channel to remind everyone about the need to meet and have a call regarding the scavenger hunt initiative. [assignee] LordKizzy [due] 7 November 2025 [status] todo

  • [action] LordKizzy to organize a call with Eric to finalize the timeline and budget for the scavenger hunt initiative. [assignee] LordKizzy [due] 7 November 2025 [status] todo

Keywords/tags:

  • topics covered: Scavenger hunt

  • emotions: Peaceful, Understanding, forward-looking
