
Week 25

Mon 16th Jun - Sun 22nd Jun 2025

Tuesday 17th June 2025

Governance Workgroup

  • Type of meeting: Weekly

  • Present: PeterE [facilitator], UKnowZork [documenter], PeterE, UKnowZork, CallyFromAuron, Alfred Itodele, AndrewBen, AshleyDawn, Évéline Trinité, guillermolucero, hogantuso, Kateri, Maxmilez, Slate, Sucre n Spice

  • Purpose: Regular Weekly GovWG meeting

Narrative:

The meeting began with Peter presenting the Consent Process results, followed by a brief exploration of the findings. All Workgroups passed the quorum threshold, which was set at 17%. The Workgroup with the lowest interaction was the Knowledge Base, with 10 contributors participating in its consent, but it still met the quorum requirement.

We all noted the progress made in this quarter's Consent Process—33 people participated, marking a clear increase from the previous quarter.

We then reviewed the Workgroups that received objections to determine whether those objections were valid. Several key themes emerged: whether Workgroups are evidencing their impact/outcomes; whether groups have unallocated reserves that they should be spending; and whether WGs have a clear "roadmap" of what they intend to do.

Vani noted that it is up to the Workgroups to decide whether they want to respond to the objections. She also emphasized the need to track recurring issues and include them in our reports. She proposed adding a new agenda item: “How do we handle objections that cannot be resolved within the timeframe of the Consent Process?” Peter supported the idea, saying it would be valuable to have that discussion. Slate raised a question about how Workgroups should respond to objections. Both Peter and Vani responded, stating that a clear document outlining the objections should suffice.

For Consent Round 2 (same as last quarter) we concluded that no new objections may be raised. However, contributors can object to a Workgroup they've previously consented to if they now agree with existing objections. They can also consent to a Workgroup that they've previously objected to if they feel that the Workgroup has adequately addressed the issues.

Kateri also shared in the chat that she believes Workgroups should make it a priority to attend the monthly sync meetings. This would help members stay informed about ongoing progress and gain insights into each guild’s work.

We discussed anonymity in the Consent Process and learned that only 23.7% of contributors were uncomfortable with the lack of anonymization. Vani asked whether we are still planning to conduct sentiment analysis for the ongoing Consent Process. Peter responded that while it would be beneficial, he’s unsure if it can be integrated during the current process.

Wrapping up, Peter suggested that, moving forward, we should schedule the first Consent Process results meeting for two hours instead of one.

Decision Items:

  • The following Workgroups received no objections, so their budgets have passed: African Guild, AI Ethics Sandbox/Think Tank, Archives, Governance Workgroup, LatAm, Moderators, Onboarding Workgroup, Process Guild, Treasury Guild, Treasury Automation, and Knowledge Base and GitHub. PBL also passed (note: they have requested no budget for Q3).

The following Workgroups received at least one objection and will be included in the 2nd round of the consent process:

  • Education Guild – 3 objections (one objection pointed out that a specific person in the guild keeps getting rewards that are too high, and that this can damage decentralization)

  • Gamers Guild – 2 objections

  • Marketing Guild – 2 objections

  • R&D – 1 objection

  • Strategy Guild – 1 objection

  • Translation Workgroup – 2 objections

  • Video Workgroup – 2 objections

  • Writers Workgroup – 2 objections

See doc here: [Q3 2025 Budgets: Consent round 1 objection summary - https://docs.google.com/document/d/17zH0s6dl0_tDTgjuCtLCmczHbYIYpHCqz_6zY7OD0n0/edit?usp=sharing]

Action Items:

  • [action] We decided to create a document listing all objections, marking those that were considered potentially invalid during today’s meeting. This will be shared in the next few days. [status] todo

Keywords/tags:

  • topics covered: Consent Process, Core Contributor, Anonymity, Sentiments, Objections, Quorum, Participation in governance, WG Sync calls, measuring outcomes, impact, Workgroup reserves, Abstentions

  • emotions: informative, Collaborative, detailed, educative, Productive

Wednesday 18th June 2025

Education Workgroup

In this meeting we discussed:

  • Consent Process objections

  • Assessors role - Portfolio for Certification Program

Discussion Points:

  • The workgroup has 3 objections to its Q3 2025 budget. Peter noted that a document containing the objections has already been posted in the Ambassador Program channel. Slate read the objections; the first objection pointed to a misconception in the budget sheet. Slate noted that the budget seems fine, while Peter countered that a thorough explanation on the budget sheet could clarify the misconception. Slate went on to explain what the budget entails. Vasu will add a note to the budget sheet.

  • The second objection pointed to the guild reserves not being utilized. Slate noted that the reserves are kept for the CCCP program, though Peter felt that on its own this might sound like an inadequate response to the objection. Slate noted that a breakdown of the reserves was already given in the sheet, and Peter said we can mention that in the response. Peter also noted that this looks like an invalid objection, but that it would still be good to include a response in the document.

  • The third objection pointed out that a specific person in the guild gets significantly more rewards than other members, and that this can damage decentralization. It also raised the Wiki site replicating the Ambassador Program GitBook. Slate noted that most of those tasks are administrative and that, going forward, we've decided to rotate them. Vasu asked who is going to handle the Dework admin task; Slate noted that that conversation will be held separately from the meeting. As for the Wiki site, Slate noted that the project was created mainly for the Ambassador Program and is not aimed at replicating the GitBook. Vasu added that we're no longer using it because of its high cost and a lack of funds.

  • We looked at the Assessor role for the Certification program. Slate noted that he has created a document for the assessor role and he asked the assessors to go through it to give their feedback.

  • Kenechi gave an update on the presentation series. He noted that Zalfred will soon be completing a module, that progress on module #3 of the series isn't as fast as on the other two modules, and that module #4 is the final module.

  • Slate asked the Assessors to drop their email address so that he could give them access to the document.

Action Items:

  • [action] Slate to create a document that will give answers to each objection to the budget. [assignee] Slate [due] 20 June 2025 [status] todo

Keywords/tags:

  • topics covered: Project Updates, Presentation Series, Wiki website, CCCP, Assessor, Consent Process, Objections, Portfolio, Ambassador GitBook, Q3 2025 budget

  • emotions: Interactive, detailed, collaborative, educative

Research and Development Guild

Agenda Items:

  • Welcoming new Members and Introduction

  • Review of last meeting summary Action Items

  • UPDATE STATUS ON DEVELOPMENT: Governance Dashboard, W3CD-Web3-Contributors-Dashboard, CSDB-Collaboration-Skills-Database, Social-Media-Dashboard, Reputation-System-using-SoulBound-Tokens-SBTs

  • Review of Objections

  • AOB Open discussions

Discussion Points:

  • Review of last meeting summary Action Items: Lordkizzy went through the action items for the previous meeting and we had a brief introduction for new members.

  • UPDATE STATUS ON DEVELOPMENT: SEO Research: Lordkizzy presented the initial ideation document for the SEO Research, highlighting the scope, objective, timeline, and milestone deliverables.

  • Q2 RETROSPECTIVE: Guillermo pointed out that issues with project management were a significant focus, particularly regarding the accessibility of R&D projects on GitHub and the lack of follow-through on deliverables. Slate raised concerns about the status of the social media dashboard project, which was anticipated to be functional but had not yet been delivered. Advanceameyaw provided an update on project deliverables, noting completed deployments and testing phases while also addressing challenges such as database integration and the need for a sustainable approach. Guillermo pointed out that a lack of responses from team members was hindering progress, and Lordkizzy highlighted the importance of learning from past failures and adapting to resource constraints.

  • UPDATE STATUS ON DEVELOPMENT: Governance Dashboard: Guillermo provided updates on the governance dashboard and on the progress of proposal templates and the voting process, indicating that current drafts are subject to revision. Colleen suggested changing the voting mechanism from consent to quorum, which Guillermo acknowledged as necessary.

  • UPDATE STATUS ON DEVELOPMENT: Events Scheduling and Notification System: Tevo provided an update on the development of a mobile application designed to improve the Events Scheduling and Notification system. He noted delays in submission and review processes, as well as the need for better integration with existing infrastructure, and shared a detailed timeline and milestone deliverables. Guillermo highlighted the necessity of integrating communication tools with the governance dashboard to facilitate decision-making.

  • UPDATE STATUS ON DEVELOPMENT: Legacy: Kenichi reported progress on smart contracts, specifically focusing on matching metadata to the existing database from treasury and Dework. He has developed an admin dashboard that enables super admins to manage stewards and their roles. Currently, he is working on integrating Discord authentication to link user profiles with Dework tasks.

  • R&D Initiatives (Metta Code Labs) and Budget Concerns: Guillermo Lucero Funes shared that he presented MetaCodersLab at Deep Funding TH with Opencog, garnering great reviews, which led to an invitation to propose an ideation RFP. He emphasized the need for collaboration on this initiative and announced that a retrospective session will be held in two weeks to discuss project progress and concerns. Lordkizzy raised concerns about the guild's direction and suggested that unresolved issues be addressed in future meetings.

Action Items:

  • [action] Guillermo Lucero will schedule a call with Andre to discuss the integration of the Governance dashboard with the Archive summary tool. [assignee] guillermolucero [due] 2 July 2025 [status] in progress

  • [action] Advance to check if Kenichi's AWS account can be used for the project deployment. [assignee] advanceameyaw [due] 2 July 2025 [status] todo

  • [action] AJ will reach out to the deep funding team to gather feedback on the beta phase of the project. [assignee] AJ [due] 2 July 2025 [status] todo

  • [action] Kenichi to share a visual presentation highlighting the progress updates on the R&D Channel [assignee] kenichi [due] 2 July 2025 [status] todo

Keywords/tags:

  • topics covered: CSDB, Proposals, SEO, Governance Dashboard, Status update

  • emotions: Collaborative, Understanding, Satisfaction

Thursday 19th June 2025

Governance Workgroup

Narrative:

The meeting focused on refining the process of tracking and addressing objections, especially recurring concerns raised by contributors across multiple quarters. One key agenda item was the preparation of the second round consent form, which will be modeled after the previous quarter’s version to ensure consistency.

We also revisited a pending discussion on disparities between high and lower earners in the community. Regarding the plan to create a Google Form to gather input from lower earners, Peter suggested discussing the form questions asynchronously in the Discord channel before proceeding.

The conversation then shifted to how objections can be effectively tracked and followed up on from quarter to quarter. Vani pointed out a recurring pattern where certain issues are raised repeatedly in one quarter, disappear the next, and then resurface later. She emphasized the importance of systematically tracking these objections and shared several documents from previous quarters that attempt to capture this feedback.

On objection tracking, Vani referenced several past documents and retrospectives and emphasized the need to collate them. She proposed a spreadsheet organized workgroup by workgroup and quarter by quarter, though she acknowledged the effort this would require, noting that the relevant working documents would need to be extracted from our documentation archives. Peter suggested the governance dashboard as a way to streamline this tracking.

We continued an ongoing discussion about how to address work group outputs that may not meet quality standards. The conversation focused on creating a safe and constructive environment for feedback, separate from the formal budget consent process. The group agreed that linking feedback on work quality directly to budget approval can make work groups feel their funding is under threat, which may discourage open dialogue. Love highlighted the lack of a neutral space for offering feedback, while Guillermo shared that he has found external insights valuable in improving his work. Vani reiterated the previously raised idea of implementing a quarterly anonymous feedback mechanism to encourage honest, depersonalized input.

The group then discussed the importance of setting guidelines for constructive feedback. Love emphasized that feedback should focus on the effectiveness of contributions, not minor imperfections. Vani agreed, cautioning against nitpicking and encouraging feedback that connects to prior concerns or repeated patterns.

Finally, Vani highlighted the importance of understanding the value of the workgroups' initiatives, noting that many comments have emerged during the consent process. Love supported the idea of soliciting feedback from a broader audience, while Guillermo suggested incorporating this feedback into the sentiment analysis they are refurbishing. The group agreed to craft a post to gather opinions before finalizing any decisions.

Discussion Points:

  • Tracking objections raised in the consent process

  • Creating spaces to raise issues around the quality of WGs' work, outside the consent process and not related to budget approval

Decision Items:

  • Governance dashboard integration plan presented; includes historical data analytics capabilities with a demo scheduled for June 24th.

    • [effect] mayAffectOtherPeople

  • A new survey targeting lower earners will be created. It is expected to be completed by the end of the quarter.

    • [effect] mayAffectOtherPeople

Action Items:

  • [action] Prep the second round consent form [assignee] CallyFromAuron [due] 20 June 2025 [status] done

  • [action] Drafting the google form to gather views on earnings from lower earners in the community [assignee] PeterE [due] 30 June 2025 [status] todo

  • [action] Create retrospective summary document for next quarter retrospective process [assignee] LadyTempestt [due] 30 June 2025 [status] todo

  • [action] The team agreed to create and share a public post to gather these perspectives before moving forward [due] 25 June 2025 [status] todo

Keywords/tags:

  • topics covered: Objections, Consent Process, Working documents, Consent dashboard, Anonymity

  • emotions: Determined, forward-looking

AI Sandbox/Think-tank

In this meeting we discussed:

  • Introduction and welcome: The meeting kicked off with Lordkizzy welcoming everyone and Osmium shared the agenda for the day’s meeting

  • Discussion on Agenda item: Osmium displayed the Agenda item and he proposed that we have discussions and debate on it.

  • Should open source AI be considered a digital right or a privilege?

Kizzy noted that he's not quite familiar with rights, but he could give some context. He mentioned that he doesn't feel AI should get rights over digital content it creates; he feels those rights should belong to the creators of the AI. He noted that it should be a privilege, not a right.

Vani in response noted that access to open-source AI should go beyond simply having the right to use it, it must also include explainability, and proper education on HOW to use it. People need to understand how AI works, especially that chatbots don’t think or feel like humans, despite sounding like they do. Without this understanding, users can easily be misled. Therefore, comprehensive, accessible information should be provided, not only for using AI tools but also for building and critically engaging with them.

Vasu agrees that open-source AI is valuable, but expressed concern over the lack of clear regulations. He drew a parallel to how China initially allowed Bitcoin mining but later imposed restrictions due to concerns over digital autonomy. Similarly, he believes there should be thoughtful regulation around AI use to ensure it is applied responsibly and doesn’t lead to unintended consequences.

  • Who gets to decide which AI models are safe enough to be public?

Lordkizzy pointed out that there is currently no central authority regulating the release of AI models, leaving safety judgments to the users themselves. He argued that introducing strict regulations could slow down technological progress, especially for open-source AI, where public feedback is essential for development. Developers rely on real-world testing to improve their models, and too many restrictions might block that process. In his view, the public should continue to play the key role in determining what is safe or harmful. He highlighted the variety of AI models available and emphasized that users like video editors should be free to choose what works best for them. He warned that government regulation could disproportionately harm small or independent developers who lack the resources of large organizations; centralized regulation might favor big players, stifling innovation from smaller creators. Therefore, he believes the decision of which AI tools are safe and useful should rest with the users, not a central authority.

UknowZork raised a question in the chat: "How do we ensure global safety when AI is borderless?" She stated that developers should try to prove their models are safe before release, a sentiment that aligns with growing global concerns over AI's unpredictable reach and influence.

Vani reflected that while it's unclear who should ultimately decide AI safety, any decision depends on developers being transparent about the risks and implications of their models. Developers may not always consider ethical impacts, either unintentionally or due to their focus on technical progress. To address this, she suggests establishing clear protocols that outline what safety-related information must be provided to enable informed assessments by users or regulators.

Vasu expressed concerns that big players like OpenAI make critical decisions behind closed doors without transparent or inclusive mechanisms, which undermines public trust.

Advance proposed that each continent should establish its own representative body made up of informed users and stakeholders who understand AI. Drawing from examples like U.S. initiatives where CEOs, economists, lawyers, and business leaders are consulted, he suggests these diverse perspectives are essential for shaping fair and democratic AI policies. Such bodies could ensure that global AI governance reflects regional values and priorities while remaining inclusive and future-focused.

  • Can decentralized ecosystems like SingularityNET provide a safer, scalable alternative to centralized AI?:

Lordkizzy reflected on past comparisons in AI Sandbox sessions, where outputs and capabilities of different AI models were assessed. He explained that the foundation of projects like Ocean Protocol is rooted in the mission to challenge dominant, closed AI companies like OpenAI. While acknowledging the difficulty due to limited resources compared to big firms, he believes the broader goal of these open communities is to push the boundaries of AI advancement and scalability through collective innovation.

Vasu explained that open-source AI drives rapid innovation through community collaboration, much like Linux, where issues are quickly fixed and systems evolve together. Unlike closed, profit-driven models, open AI models such as LLMs let people tailor technology to their cultures and values, sparking new AI economies. But this openness also brings challenges around governance and accountability.

Santos expressed concern about the safety and accountability of decentralized AI systems. While centralized AI models may currently seem safer because responsibility lies with a specific company, decentralized models create ambiguity, especially when something goes wrong. He questioned who should be held accountable if, for instance, a harmful AI application is built on an open platform like ASI Mini. This highlights a core tension: decentralized AI promotes openness and innovation but currently lacks the clear accountability structures found in centralized systems.

Vani reflected on a cultural shift where people may soon assume that most media - photos, videos, news - is AI-generated and likely false. This challenges traditional ideas of harm, which often hinge on whether something is factually true. She suggested that as believability declines, the focus may shift to assessing the vulnerability of those impacted, rather than just the content’s accuracy. For example, even AI-generated abuse imagery is still deeply harmful, despite not being “real,” showing that intent and impact can matter more than truth in determining harm.

Vasu agreed with Vani that people are becoming accustomed to AI-generated content, but he believes society is also responding by seeking authenticity. He points to emerging social media platforms like BeReal and Mindplex, which emphasize posting real, minimally AI-altered content. These platforms often include factual verification mechanisms, offering a counterbalance to the flood of synthetic media and helping rebuild trust through genuine, verifiable interactions.

  • What parallels can we draw between open-source software movements and open AI?

Vani noted that there's no government deciding on how open source software is regulated - it's something that developers do themselves in a decentralised way. She suggested that maybe OpenAI can learn from that.

Vasu added that while open-source software like Linux is built on editable code layers, AI models are fundamentally different due to their probabilistic nature and complexity. Unlike traditional software, modifying or retraining AI models requires significant resources, expertise, and access, making them less accessible for community-driven development. AI tools integrate advanced capabilities and often need specialized deployment setups (e.g., APIs, tele-sensing), making the open-source AI landscape harder to manage and govern than classic software projects.

  • Would granting access to AI as a right reduce global inequality—or widen the misuse gap?

Lordkizzy argued that formally granting rights or imposing regulations on AI use is unlikely to increase global inequality or the misuse gap. Currently, AI is accessible without much restriction, and both beneficial and harmful uses already coexist. In his view, introducing rights or structured access might actually help curb misuse by creating clearer guidelines and responsibilities, leading to more ethical and balanced use of AI across different regions and user groups.

Vasu added that AI misuse is inevitable, but rather than letting that dominate the conversation, the focus should be on establishing governance that is thoughtful yet non-restrictive. He suggests moving toward a shared understanding (“mutual deposit”) on the issue, and emphasizes that any regulatory framework should protect freedoms and encourage innovation, rather than impose rigid controls that could hinder progress.

Santos noted that granting rights to AI access could have both positive and negative effects. On one hand, global inequality may worsen if some countries embrace open-source AI while others, like North Korea, refuse it, leading to uneven access. On the other hand, misuse is unavoidable, especially given AI’s internet-based nature. Drawing a parallel to the dark web, he noted that lack of control can turn promising technologies into harmful ones, reinforcing the need for thoughtful oversight without expecting to eliminate all risks.

Vani argued that the potential for misuse shouldn't determine whether people are granted rights, just as we wouldn’t deny voting rights to marginalized groups based on how they might vote. She challenged the idea that marginalized users are more likely to misuse AI, suggesting instead that those in power and privilege are more prone to unethical behavior to protect their status. Ultimately, she viewed the debate over misuse as a distraction from the more important principle: rights are granted because they are just, not because they’re risk-free.

  • Can open-source AI be governed ethically at a global scale without censorship?

Santos emphasized that censorship is unavoidable when AI is deployed on a global scale. He used TikTok as an example to explain how censorship operates—even in widely used platforms—where certain language or content is restricted. He believes that similarly, open-source AI will face censorship if governed by a central authority or higher power like a government. If the content or capabilities of an AI model are deemed unfit for public use, that authority will likely restrict or suppress it, making full openness difficult to maintain under global oversight. He suggested that because different regions have varying norms and sensitivities, governing bodies will always exist to filter or restrict certain uses, making total openness difficult in practice.

Action Items:

  • [action] Jeffrey is to document the next meeting session [assignee] Jeffrey Ndarake [due] 26 June 2025 [status] todo

  • [action] Lordkizzy will prepare a plan for a debate next week based on the discussion points. [assignee] LordKizzy [due] 19 June 2025 [status] todo

Keywords/tags:

  • topics covered: API, AI Models, Open source, Misuse gap, Right, Privilege, Modules, Safety, Discussion/Debate Session, Global inequality

  • emotions: Interactive, Educational, Productive, Informative, Discursive

Friday 20th June 2025

Video Workgroup

Discussion Points:

  • Budget Approval Update from Kizzy

The team did not receive approval in the consent round due to two objections, regarding lack of participation in workgroup updates and a high budget allocation for content with low engagement. Kizzy explained that the objections are considered invalid, as there are no regulations requiring attendance at updates. A decision was made to give consent in the next round to ensure budget approval for the quarter.

  • Social Media and Engagement Challenges

Tuso highlighted significant engagement issues on the current social media pages and emphasized the need for verified accounts to increase visibility. There was discussion around account stability affecting content quality and reach. The team is exploring platforms like TikTok and Instagram to boost social media engagement. A need was identified for community engagement before investing in influencer marketing partnerships.

  • Funding and Structural Concerns

Tuso raised concerns about current funding limitations and the need for better program structure. There was discussion on the importance of proper collaboration and adequate funding to achieve project goals. The team agreed to focus on securing budget approval to ensure funds are available for quarterly operations.

  • Task Distribution

  • Social Media Manager: Andrew (confirmed due to effective current management and platform access)

  • Documenter: Devon (assigned to document next session)

  • Task Manager: Devo

  • Town Hall Edit: Subzero

  • Updates on Recent Activities

Kizzy provided a city event update, noting poor participation and issues with task link management. There was discussion on the reduced allocation for city events and the need for better collaboration with other projects. A call was scheduled to discuss further improvements to city event management.

Decision Items:

  • Continue exploring TikTok and Instagram

    • [rationale] for increased engagement

    • [effect] affectsOnlyThisWorkgroup

  • Focus on community engagement strategies before influencer partnerships

    • [effect] affectsOnlyThisWorkgroup

Action Items:

  • [action] Team to give consent in next round for budget approval [assignee] all [due] 30 June 2025 [status] todo

  • [action] Devon to document next session [assignee] devon [due] 27 June 2025 [status] todo

Keywords/tags:

  • topics covered: Budget Approval Process , social media, Engagement, Q3 2025 budget, low token price, structure of WG, Collaboration, task assignment, lack of funding, TikTok

  • emotions: Need for Clarity

Saturday 21st June 2025

Gamers Guild

Discussion Points:

  • Slate explained the Guild's mentoring model and development scope, including asset creation, API integration, and AI behavior scripting in Roblox to new member Onfroy.

  • Review of objections to the Q3 2025 budget: Objection 1: "No clear outcome, reserves not explained." ➤ Countered by referencing the Q2 2025 report and Dework tasks showing reserves allocated for in-progress items (e.g., Writers Guild tasks, R&D support, asset payouts).

Objection 2: "One person earns too much. What's the point of the project?" ➤ Slate’s involvement is due to specialized development work. ➤ Guild tasks are open and skill-based; low participation often leaves technical tasks to Slate. ➤ Members backed this with examples and suggested greater visibility and shared responsibilities. ➤ Kizzy proposed onboarding more people to learn scripting/UI to balance task distribution.

  • Guild Purpose Clarification:

The Gamers Guild's goal isn't just game-making, but also: visualizing and showcasing Ambassador Program services/tools; bridging SNET and broader audiences through platforms like Roblox; and integrating APIs, budget dashboards, AI models, etc. The game acts as an interactive mirror of the SNET ecosystem, not just a play space. Roblox is treated as a media layer, similar to YouTube but with interaction.

  • Suggestions to decentralize tasks:

Assign easier UI design tasks to members while Slate handles backend logic. Continue offering mentorship for scripting and building. Use asset creation and department building as open tasks to build skill/participation.

Action Items:

  • [action] Map creators [assignee] Gorga Siagian, Kateri [due] 28 June 2025 [status] todo

  • [action] Department Creation - Writers Workgroup [assignee] devon, LordKizzy [due] 28 June 2025 [status] in progress

Game Rules:

No games played

Keywords/tags:

  • topics covered: Q2 2025 quarterly report, Decentralisation, Workgroup Goals, Consent process, Budget objections, Q3 2025 budget

  • emotions: Collaborative, forward-looking
