
Gooey.AI's Open Source Vision

Shared AI Workflows as Public Innovation Infrastructure
by Sean Blagsvedt - Founder, Gooey.AI


How does every org become an AI organization (so they don’t get displaced by another org that does)? How can we leverage the constant advances in both open and private AI models, making them cheaply available and ready for impact evaluation in an organization’s specific use case? How do organizations discover and apply the hard-won AI lessons of their field’s peers to their own problems?
We propose that an open source AI orchestration and hosting service would greatly accelerate the deployment, iteration and testing of AI-based solutions for enterprises and development organizations. The system should expose most of the world’s prominent public and paid AI services; provide a hosting infrastructure to run open source AI models; foster an ever-growing collection of simple, reusable AI workflows; provide meta-tools such as feedback, LLM analysis and billing systems; and be available both as a reliable hosted cloud service and as software that runs on private, enterprise or government compute systems. With such an infrastructure, one-off AI investments become new assets for the public to reuse, thereby enhancing the speed of innovation.
Video of Gooey.AI Founder explaining the above diagram.
“If you get into this (AI) space, the most important thing is that you share what you are learning.”
Simon Willison, co-creator of Django, from “Catching up on the weird world of LLMs”
We must improve humanity’s innovation infrastructure - to survive the climate crisis and to solve virtually any problem we can imagine. By letting each of us build more efficiently on one another’s work, we increase the leverage of our collective efforts. This is our theory of change: we hope to accelerate progress by improving the innovation infrastructure of every organization - including development organizations - that wishes to leverage AI.

Why now?

“It’s not that AI will replace lawyers; it’s that the lawyers who use AI will replace those that don’t.” - Superlegal
“Every business will become an AI business.” -Satya Nadella
It is clear to us and many others that the productivity enhancements first made possible by software - with its defining feature that humans can reuse and modify prior investments at near-zero marginal cost - and now by modern AI tools such as OpenAI’s ChatGPT will likely transform how most processes in organizations function. We go further: we believe the Superlegal adage above will apply to almost every organization and job function - namely, that the organizations and people that best leverage AI, as a super-set of all reusable collective human work and knowledge, will outperform those that do not. As Bill Gates stated in his April 2023 memo:
The development of AI is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone. It will change the way people work, learn, travel, get health care, and communicate with each other. Entire industries will reorient around it. Businesses will distinguish themselves by how well they use it.
But how can we help people and organizations make this transition to a hyper-competitive and productive world? What tools do we need when “thinking” jobs are ones for AI prompt writers - trying to wrangle the AI to our desires, and/or API stitchers - connecting non-obvious or custom sets of data and functionality to build novel and useful new things? How do we specifically help organizations - such as development organizations - learn from each other’s investments?

Stories of AI

To better understand why an infrastructure like Gooey is needed, we need to understand the needs of the organizations that could benefit from it. Here we present three organizations, all of which are attempting to build AI chatbots.

Digital Green

Digital Green is an NGO that helps over 4 million smallholder farmers increase their productivity. For the last 20 years, they have sought out and filmed best practices from these farmers and then distributed their expertise through village-level screenings and more recently, 80M+ YouTube views. They have recorded over 10,000 videos and operate in 10 states in India, Ethiopia and Kenya. Like many established organizations, they’ve created an incredible repository of wisdom. Ideally, we’d like this wisdom to be available instantly to every farmer and the extension agents who help mentor farmers and convince them to take a risk on a new best practice and/or grow a more climate change resistant crop. Such a solution would ideally cost almost nothing on a per user basis, be available whenever the agent or farmer needed help, incorporate all of the latest agricultural research, science and real-time weather and soil sensor data, give fluent advice in any language, dialect or literacy level and have a robust feedback system so we could measure the quality of its advice, whether it was being followed and how much it impacted farmers.


Z

Z is a US startup that provides home repair and HVAC services via technicians. They too have collected a large repository of institutional wisdom in the form of vetted, relevant training videos, repair decision trees and hundreds of manuals. Their aim is to make this wisdom available in multiple languages via speech interfaces in Slack, so their technicians can immediately address hard problems in the field.

Noora Health

Noora Health works in 400 hospitals across India, Indonesia and Bangladesh and provides training for families who are just leaving the hospital after giving birth or undergoing a surgery. They built a successful BPO of nurses and doctors who provide WhatsApp based advice to family members and patients on a wide variety of topics. They’ve answered over 30,000 questions (about 200 / day) but now want to scale their services at near-zero marginal cost to 70M people.
Investing in AI to provide better, cheaper and more scalable healthcare services

Their Shared AI Problems

Each of these organizations is investing in LLMs and AI chatbots to provide better & cheaper services for their users. This process today is difficult and expensive on several fronts.
  1. Technology understanding: There is an ever-increasing set of AI technologies that organizations could leverage to solve their problems. DigitalGreen, Z and Noora have each tasked their senior engineers to explore the potential solution space - including LLMs like OpenAI’s GPT, vector DBs for storing larger knowledge bases, and translation, speech recognition and text-to-speech AI models to support low-resource-language users. But the AI space is vast, constantly changing and growing in capability every day as both the largest tech companies and the best-funded startups release new tools. It’s incredibly difficult to keep up with these innovations, let alone understand their relative trade-offs, without actually building a prototype. Hence, DigitalGreen, Z and Noora all funded internal prototyping projects - often stretching into months. These prototypes typically require technical AI knowledge, and such developers are among the most expensive to hire right now.
  2. Time to test: Given the complexity and newness of these systems, prototypes that are testable with real users often take months to build.
  3. Cost of deployment: AI chat prototypes that use just an LLM such as GPT-3.5 are fairly cheap to deploy today but also have limited functionality. For example, if orgs want to employ a fine-tuned, open source AI model that promises better speech recognition in a low-resource language, these models often require the most expensive computers available today. Gooey rents such machines (e.g. 80GB A100 GPUs) from Google for approximately $12,000 per computer per month. Running them efficiently then requires specialized and very expensive Dev and MLOps engineers.
  4. Cost to prove user value: Simply building an AI demo or prototype doesn’t prove it helps users achieve their goals. Hence, feedback, cohort and usage analysis systems must also be created, and then the prototype must be deployed at reasonable scale inside apps or via integrations such as WhatsApp or Slack.
As we can see with just the chat use cases here, building viable systems with proven user value is hard and expensive and we believe a platform like Gooey can help.
Building viable AI systems with proven user value with Gooey

Platform Principles

Together with our influences (see Appendix), we hold these principles in mind as we design the platform.
  1. Learn from others
Most organizations don’t have or can’t afford AI researchers on staff, but they could certainly benefit from knowing which of their peers’ AI initiatives are working best - and, ideally, from applying those initiatives quickly and cheaply to their own particular domains.
  2. Keep Abstracting
We can offer the greatest leverage to our customers by building on top of the constantly expanding foundational AI ecosystem. All of tech’s biggest players - MSFT/OpenAI, Google, AWS - are competing for developers to integrate with their respective technologies, while the open source community also constantly releases new innovations. With Gooey, it should be easy for organizations to build on the best of private and open source models. Furthermore, as new innovations reach the market, organizations should be able to “swap in” the latest technology components and compare their relative price vs. performance, without large up-front investments to experiment with or deploy a potentially game-changing model or API.
  3. Encourage Sharing + Reuse
Libraries, the scientific peer-review system, GitHub, open source and the Mosaic browser’s “View Page Source” all enhanced learning and innovation ecosystems by encouraging innovators to share their work and enabling viewers to understand it deeply and quickly. Hence, like GitHub repositories, Gooey.AI workflows are public by default for others to discover and reuse as we grow the ecosystem of creators building and sharing AI workflows. Being the website where great AI workflows are discovered increases our network value and hence encourages more creators to join and strengthen the ecosystem.
  4. Include Everyone - especially non-coders + non-English speakers
Organizations that work with marginalized or non-English-speaking populations must be able to quickly run and assess the effectiveness of tools in resource-poor languages - languages often spoken by millions (not billions) of people whose documents do not dominate the content of the Internet. We will facilitate this by making the private and public models & APIs for low-resource languages available, with numerous examples of how others use them and evaluation frameworks for organizations to easily benchmark which models perform best on their particular users’ data sets.

The Proposal

An open source API orchestration layer of simple, shared workflows, with unified billing to access the entire AI universe.

Orchestration Capabilities

(Starting from the diagram’s center, then clockwise starting at 11 o'clock)
  • Workflows are the core metaphor of Gooey - small collections of LLM prompts that weave functionality together via API calls.
  • Each workflow has a credit cost, used to pay for the API calls that a given workflow runs.
  • Apps (including applications created by client organizations)
    • Via our APIs, orgs can expose workflow functionality in their own applications, websites, etc. This allows them to deploy and/or white-label any workflow as part of their own solution.
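As an illustration of this app-integration path, here is a minimal Python sketch of calling a hosted workflow over HTTP. The endpoint path, workflow name and payload fields are assumptions for illustration, not Gooey.AI’s documented API surface:

```python
import json
import urllib.request

# Hypothetical base URL and payload shape -- check Gooey.AI's API docs for
# the real endpoint and parameters before using this pattern.
GOOEY_API_BASE = "https://api.gooey.ai/v2"

def build_workflow_request(workflow: str, api_key: str, payload: dict) -> urllib.request.Request:
    """Build an authenticated POST request that would run a hosted workflow."""
    return urllib.request.Request(
        url=f"{GOOEY_API_BASE}/{workflow}",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Example: prepare (but don't send) a question for a copilot-style workflow.
req = build_workflow_request(
    "copilot",
    "sk-demo-key",
    {"input_prompt": "When should I irrigate my chickpea crop?"},
)
# urllib.request.urlopen(req) would execute the run and return JSON output.
```

A white-labeled app would make this call server-side, keeping the API key out of the client.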
Communication platforms
  • We support WhatsApp, Facebook, Instagram, Slack, IVR/Telephony and embeddable Web widgets today as communication platforms, with Telegram and Discord support expected in Q4 2023.
The Gooey.AI Website
  • Public workflows and examples are shared and showcased
  • Any workflow can be immediately altered and run on the site or via an API call
  • Each new run is provisioned a new public (but obscure) URL, enabling easy collaboration
  • Future: Up / Down votes, better search, trending workflows, top creators, etc
Shared Workflow Services
  • These services supplement the value of workflows and today include:
    • Automated, comparative workflow analysis, enabling orgs to swap in new models or make changes to their workflows and immediately re-assess, compare and score outputs
    • Retention, usage analytics and charts for all users of the /copilot workflow
    • Built in feedback, translation and audio services on most communication platforms
    • LLM analysis for conversations, used to create structured categories, data or JSON from unstructured conversations between the user and the copilot.
    • The ability to push structured data back to data stores of apps
    • Connections to dashboards and external data stores enable orgs to understand usage
  • Future: Create and edit workflows with natural language
Model Abstraction and Unified Billing
  • We abstract the most popular AI models and make them “hot-swappable” depending on the particular needs of each workflow
  • Each model has a per-API call fee which is deducted from the user’s credits when they run a workflow
Paid / Private APIs
  • Gooey buys a key and then deducts credits from the workflow caller’s account
  • New paid APIs can be made available for experimentation and integration from private, non-open-source partners
  • Future: Organizations can use their own paid private keys (e.g. OpenAI keys)
Open source AI models
  • Gooey can host and dynamically scale any open source AI model on our cluster of A100 GPUs
  • Culture specific models: Models that enable fine-tuned speech recognition + translation can be hosted and made available with fast execution for near real-time chat applications
  • Future: Organizations with their own compute resources will be able to host the Gooey AI model orchestration code on their own machines.
  • Future: Developers can contribute code to host new models
Inbuilt Data providers
  • YouTube Transcription - we can take a collection and/or playlist of YouTube URLs, transcribe, translate and then run an LLM prompt over the transcript to extract synthetic data such as FAQs. This is the process we used to make DigitalGreen’s video library useful and accessible in Farmer.CHAT.
  • SERP / Google result lookups. This gives any workflow the power to search the Internet and optionally pull matching pages into vector DBs to be analyzed in real-time by LLM prompts.
  • Google Drive - we can connect and authenticate to any Google Drive link.
  • AI OCR providers - often important knowledge exists in old scanned PDFs, filled with complicated tables and visuals. We’ve built in advanced OCR AI providers to transform complex documents into well parsed tables and structured sheets for inclusion in vectorDBs so the LLM can correctly reason over their data.
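The YouTube-to-FAQ flow described above can be sketched as a small pipeline. The transcription, translation and LLM steps are stubbed placeholders here (the function names are ours, not Gooey APIs); in production each would call the corresponding AI service:

```python
# Sketch of the transcribe -> translate -> extract-FAQs pipeline.
# Each step is a stub standing in for a real AI service call.

def transcribe(video_url: str) -> str:
    # Placeholder: a speech-recognition model would run here.
    return f"transcript of {video_url}"

def translate(text: str, target_lang: str) -> str:
    # Placeholder: a translation model would run here.
    return f"[{target_lang}] {text}"

def extract_faqs(transcript: str) -> list:
    # Placeholder: an LLM prompt like "Extract FAQs from this transcript"
    # would return structured question/answer pairs.
    return [{"question": "What does this video cover?", "answer": transcript[:60]}]

def videos_to_faqs(video_urls: list, target_lang: str = "en") -> list:
    """Turn a playlist of video URLs into synthetic FAQ training data."""
    faqs = []
    for url in video_urls:
        transcript = translate(transcribe(url), target_lang)
        faqs.extend(extract_faqs(transcript))
    return faqs
```

The resulting FAQ entries are what gets loaded into a vector DB for the chatbot to search.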
Profile data stores
  • Especially for chat applications, apps will want to seed interactions before a chat begins - e.g. if DigitalGreen knows the location of a farmer from their phone number, we can fetch this data to better inform the LLM prompt and advise the farmer with location-specific information.
  • Soon, we’ll enable workflow authors to easily pull data from an external source (via OpenAI Functions as described below) and insert the returned data into its LLM prompt.
  • Additionally, we’ll enable workflows to push data back to Profile data stores too. E.g. If the user mentions their location in a conversation, the organization should be able to easily deploy a simple script that pushes that particular user’s location data back to their own profile data store.
  • Example Data Stores: FarmStack, Sunbird
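The push-back pattern in the bullets above is essentially a guarded merge into a profile record. A minimal sketch, using an in-memory dict where a real deployment would write to a store like FarmStack or Sunbird:

```python
def update_profile(store: dict, user_id: str, extracted: dict) -> dict:
    """Merge non-empty fields extracted from a conversation into a user's profile."""
    profile = store.setdefault(user_id, {})
    for field, value in extracted.items():
        if value:  # only write fields the LLM analysis actually found
            profile[field] = value
    return profile

# E.g. the LLM analysis found a location but no crop in the conversation:
store = {}
update_profile(store, "+254700000001", {"location": "Nakuru", "crop": ""})
```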
Any Data Source (via OpenAI Functions)
  • Workflows need the ability to read external data sources in real time. For example, in order to properly give advice on how a Bihar farmer should plant his crop, a weather forecast is often crucial.
  • We see OpenAI’s Functions becoming an industry standard for how LLMs can selectively integrate data from any source, with Function support being the basis of ChatGPT and Bing Extensions. Hence, we plan to implement this protocol as a standardized and re-usable method to integrate external data sources.
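Concretely, the weather example above would look something like this under OpenAI’s function-calling format. The JSON schema shape follows OpenAI’s API; `get_weather` itself is a placeholder for a real forecast service:

```python
# A function declaration the LLM can choose to call (OpenAI Functions format).
get_weather_schema = {
    "name": "get_weather",
    "description": "Get the weather forecast for a village or district",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {"type": "string", "description": "Place name, e.g. 'Patna, Bihar'"},
            "days": {"type": "integer", "description": "Forecast horizon in days"},
        },
        "required": ["location"],
    },
}

def get_weather(location: str, days: int = 3) -> dict:
    # Placeholder: a real implementation would call a weather API here.
    return {"location": location, "days": days, "forecast": "light rain"}

# When the model responds with a function call, the orchestrator dispatches
# it by name and feeds the JSON result back into the conversation.
TOOLS = {"get_weather": get_weather}

def dispatch(call: dict) -> dict:
    return TOOLS[call["name"]](**call["arguments"])
```

Because the schema, not the implementation, is what the LLM sees, the same dispatcher pattern can wrap any data source behind a workflow.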

The Farmer.CHAT Use Case

We’ve partnered with DigitalGreen to integrate 400+ of their videos, 100+ documents and best practice FAQs to create a multi-lingual, audio-capable WhatsApp chatbot for farmers and the agriculture extension agents who are employed by the government to mentor farmers.
To make DigitalGreen’s vetted documents, URLs and videos accessible in local languages to farmer extension agents, we’ve been evolving the workflow. Building it as a re-usable recipe has allowed us to leverage the advancements added since April 2023 (e.g. feedback mechanisms, conversation analysis, conversational summarization to improve vector DB searches, synthetic data creation from video transcripts, etc.) and to extend those new features as Farmer.CHAT expands to 5 geographies. The work to expand to a new geography consists of:
  1. Update the knowledge base documents
  2. Gather “golden” questions and answers that act as the dataset to measure whether changes to the workflow actually improve its output
  3. Update the conversational analysis prompts
  4. Set the language
  5. Provision new WhatsApp numbers
  6. Evaluate feedback from users and iterate on the knowledge documents
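Step 2 (“golden” questions) amounts to a regression harness for the workflow: score its answers against vetted question/answer pairs before and after each change. A minimal sketch - the token-overlap metric here is a crude stand-in for an LLM- or embedding-based scorer:

```python
def token_overlap(expected: str, actual: str) -> float:
    """Fraction of expected-answer tokens that appear in the actual answer."""
    a, b = set(expected.lower().split()), set(actual.lower().split())
    return len(a & b) / len(a) if a else 0.0

def evaluate(workflow, golden: list) -> float:
    """Average similarity between the workflow's answers and the golden answers."""
    scores = [token_overlap(g["answer"], workflow(g["question"])) for g in golden]
    return sum(scores) / len(scores)

# Re-run after each prompt or model change to see whether the score improves:
golden = [{"question": "When should I sow wheat?", "answer": "sow wheat in november"}]
score = evaluate(lambda q: "sow wheat in november after the rains", golden)
```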
This process is repeated as we move to additional domains and build new copilots in entirely different fields; the process has changed from one focused on coding new features to one focused on content curation, usability testing, impact measurement and iteration with users to make a valuable service. Here’s initial feedback from the first 100 users of Farmer.CHAT.

Evidence of Gooey.AI Traction

Much of what’s been described above is available on Gooey.AI today. For example, Gooey.AI users can:
  • Use the hosted instance and immediately tweak and iterate on an existing workflow
  • Save their LLM prompt, document and model parameters as reusable, shareable URLs for others
  • Call any workflow via REST APIs, meaning organizations can integrate or white-label any service into their own apps
  • Leverage SERP search (i.e. the ability to search the web), YouTube videos (and the ability to transcribe them) and Google Docs as data services
  • Run analytics, feedback and conversation analysis on /copilot
  • Connect /copilot workflows to WhatsApp, Slack, Facebook or Instagram
  • Access 20+ workflows and their example uses on Gooey.AI
Furthermore, the market appears to be reacting positively to our hypothesis that finding and forking AI workflows is compelling: we’ve had ~190,000 unique users since the start of 2023 and ~900,000 workflow runs. Over a dozen development organizations have approached us in the last quarter to use our AI workflows - DigitalGreen, NooraHealth, IPRD Solutions, Quicksand, TheNudge, AllIn, PrecisionDevelopment, Jhatkaa - plus many private sector clients including Zephyr and MyHeritage. These organizational engagements range from tens to hundreds of thousands of dollars in both consulting and API-level revenue to Gooey.AI.

Transition to an Open Source Ecosystem

Several of our clients have specific open source asks:
  1. Run the Gooey workflow business logic on their own servers, with the ability to inspect and edit the code like other open source projects.
  2. Exclusively use open source AI models (rather than sending their user data to Google, OpenAI or other private companies for processing).
  3. Host workflows - and the open source AI models they depend on - on their own computing infrastructure. This is vital to governments as they consider scaling these services to hundreds of millions of users.
  4. Use their own keys for private paid API calls (while optionally continuing to use the Gooey service for unified billing to models and services for which they don’t have private API keys).
  5. Contribute new code modules to host new open source models or connect to paid API services.
  6. Contribute new data service connectors, e.g. look up the weather before running a script.
Fulfilling these open source requests presents significant risk to Gooey.AI as an organization, given that our ability to charge for our workflows and hosting infrastructure will be affected. In particular, much of our revenue today comes from enterprises running workflows in the cloud on Gooey.AI and once we are open source, those organizations could choose to download their particular workflows, use their own direct API keys and/or run their own GPUs and pay nothing to us.
That said, we believe the ecosystem benefits of open source could be extremely worthwhile and that we can navigate the business risk, albeit with support. Building a thriving open source community would by definition imply more organizations using and contributing to the Gooey.AI codebase and should create more opportunities for our strategic consulting business and cloud hosting business.

Open source Milestones

  1. With sufficient support, Gooey makes a public commitment to become open source.
  2. Gooey workflows can be hosted on organizations’ own servers by downloading their workflow and our orchestration runtime.
  3. Our analytics DB and visualization tools are open-sourced and locally hostable.
  4. Orgs (such as governments) can host large open source AI models on their own GPUs.
  5. Orgs can specify their own paid API keys and call Gooey’s AI abstraction cloud service for any key they lack.
  6. Other orgs can contribute code to the codebase, e.g. a communication service adaptor that connects our copilot workflows to a Kenyan IVR solution.

Open Source Usage Patterns

Just as GitHub hosts both public and private code repositories, we expect the public workflow discovery experience to remain on our website. However, our core workflow runtime and AI model orchestration cluster will be open sourced.
For example, if you want to see how others in your field are using AI, and then discover and modify their workflows, the Gooey.AI website will be your goto destination. Once there, you can choose to keep your workflows hosted on Gooey.AI or choose to download the workflow’s prompts, settings and code to your own server, along with the orchestration run-time required to execute the workflow and the AI model cluster code if your servers are capable of running it.
Here’s an overview of the primary components:
| Component | Description | Open source or private to Gooey? |
| --- | --- | --- |
| Individual workflows | The recipes of LLM prompts, settings and connections among AI models | Example workflows are publicly viewable, runnable and forkable on Gooey.AI; a user’s workflow may be made public or kept private. Future: downloadable locally |
| Workflow orchestration runtime | The code required to execute a workflow (e.g. call an LLM with a prompt and send the result to another service) | Future: open sourced on GitHub; hostable as a Docker container |
| AI model cluster | The collection of 50-100 open source AI models that must run on fast GPUs (e.g. LLaMA2, Bhasini speech recognition) | Future: open sourced on GitHub; hostable as a Docker container |
| Analytics DB | Stores the history of workflow runs and /copilot conversations; visualizes data | Future: open sourced on GitHub; included with the Docker container |
| Gooey.AI workflow directory | The public collection of workflows and the discovery interface | Hosted on Gooey.AI; not expected to be open sourced |
| Private AI services | The platform aggregates many paid AI API services: (1) private AI APIs - OpenAI, Google, Azure, AWS, Replicate, uberduck; (2) communication platforms - Facebook/Instagram/WhatsApp, Slack; (3) other APIs - SearchSERP, Contact Lookup | Future: orgs will be able to provide their own private AI API keys rather than using Gooey credits |
Hence organizations can choose to run their workflows in three ways:

| Hosting option | Useful for | Compute requirements | Cost to Gooey |
| --- | --- | --- | --- |
| Cloud hosted | Orgs that don’t want to manage servers | None | ~$0.05-$1.00 per run, depending on the workflow |
| Entirely org hosted | Govts + orgs with high security needs | Large GPU cluster to host workflows, runtime, AI model cluster + analytics DB; optional private AI keys | No per-run fees; optional consulting/support; a license fee may be required for large, private orgs |
| Org-hosted runtime + Gooey cloud models | Orgs that want to privately manage their AI workflows but still experiment with the latest models | Moderately capable server to host the workflow runtime; private AI keys | Per-run fees whenever a workflow requires a private AI API call for which the org doesn’t have a key; per-API-call fee to use our cloud AI model cluster |


With this open source ecosystem in place:
  • Development orgs can prototype, test, measure impact and iterate faster at much lower cost
  • The latest AI innovations from the ecosystem get deployed in real systems faster
  • Every org gets to reuse and tweak the best performing AI interventions of others
  • Greater sharing of tech and AI components such that new innovations are published as components in Gooey and then adopted across many of its ecosystem users.
  • Shared measurement infrastructure
  • Insights and data on how research-based knowledge is being translated by AI systems into advisories, and whether those advisories are adopted.

How this approach changes how large funders such as BMGF + EkStep invest in tech projects:

An organization wants to create a local-language IVR front-end for ChatGPT.
Imagine a Kenyan NGO proposes to BMGF a great potential innovation - allow non-literate, non-smartphone users to call a phone number, ask any question in their local language and receive back an audio answer, leveraging the OpenAI GPT LLM’s knowledge base via the API that powers ChatGPT.
Under the current funding system, this organization would have to build (or use private models for) all of the components themselves. They would:
  1. Get their own API keys to various LLMs like OpenAI’s.
  2. Determine which speech recognition and synthesis models work best for their users, then host that infrastructure on their own GPUs.
  3. Build robust connections among the LLMs, the speech recognition/synthesis components and whatever IVR system they choose.
  4. Build feedback and analysis systems to determine usage and retention patterns and do cohort analysis.
  5. The resulting code would likely not be structured for easy reuse by other organizations.
If this were a Gooey ecosystem funded project, the execution would significantly differ.
  1. Faster time to validation. By re-using components already in Gooey, the NGO can perform large-scale usability testing faster to determine whether a wider rollout is merited - e.g. they could modify the LLM script, select their users’ language and then connect the workflow via an API to WhatsApp or their IVR service.
  2. Leveraged investments in components. If the org chose a new IVR provider, e.g. in Kenya, they would connect its endpoint not just to their own code but to the Gooey.AI /copilot workflow, so that any other user of /copilot could also connect to that provider in the future. Importantly, this implies that each BMGF- or EkStep-funded tech project ideally ends up growing the collective open source codebase that all future projects can re-use.
  3. Rather than coding up another direct API connection to OpenAI as the LLM, they would use our abstracted LLM workflow (in /copilot or /llm), meaning they could easily assess and compare alternative LLMs as they are launched.
  4. The code components and LLM scripts are inspectable and reusable in a standardized format for other organizations in the ecosystem to learn from and re-use, just as the Farmer.CHAT workflow (including its LLM scripts, WhatsApp connections, knowledge base documents and speech recognition models) is publicly viewable and re-usable today.
  5. Feedback. By using our /copilot workflow, the organization gets a qualitative and 👍🏾 👎🏽 feedback user interface, plus storage and analysis systems, with no additional development cost.
  6. Measurement. If the org chooses, it can leverage the standardized conversation database components already available in Gooey. Doing so - and then sharing this data - would enable comparison of conversational usage patterns across multiple funded conversational projects.
Like Linux or other large scale open source projects, we fully expect similar needs to appear across organizations deploying GenAI solutions. If we can get more organizations building on a shared platform like Gooey, as each organization solves similar problems in code (and pushes those solutions back to a shared code base), the ecosystem’s pace of innovation will accelerate.

How We Measure Success

As an open source digital public good, the Gooey ecosystem will hold itself to the following metrics.
| Metric | As of Oct 2023 | Target Oct 2024 |
| --- | --- | --- |
| Organizations that deeply integrate the Gooey API (measured today as organizations paying >$10,000 for our solutions; later, by how many pull our code when we push a monthly update) | | |
| Monthly active developer contributors | | |
| Unique users that have run AI workflows since Jan 2023 | | |
| AI workflow runs since Jan 2023 | | |
| Buyers of Gooey credits | | |



Appendix: Our Influences

There’s a long history and discipline of ideas, tools and institutions that have accelerated innovation - from public education as a societal investment to unlock the intellectual potential of every citizen, to open source software. Germane to this project, there are several from which we borrow ideas.
Scientific journals and peer reviewed papers
This practice - at least as old as the Enlightenment - enables public review, critique and learning from the best minds in numerous fields, allowing their successors to build on their work.
Patents
The invention of the patent importantly balanced two age-old goals of innovation: sharing a great idea sparks better ones in others, but in a capitalist society we still want incentives for doing the hard work of inventing and then sharing a great idea.
Published, verified results
Verified, truthful results of expensive interventions in health, development and business are tremendously valuable to teach future practitioners in a field which innovations worked, which failed and the infinite degrees of success between them.
View Source on HTML pages
The first Internet browsers contained “View Source”, which let every viewer of a webpage see the programming code that created it. These pages could be downloaded, modified and run again to create slightly different versions of the original webpage. Unlike previous generations of software - compiled such that their human-created source code was hidden - every page of the early Internet was open source. This simple innovation infrastructure tool is arguably among the most important reasons the public Internet succeeded while every closed system - like America Online, Prodigy, etc. - failed. Its design inherently created millions of Internet tinkerers.
Digital Public Goods
Aadhaar - the biometric ID system that’s enabled India to give verifiable, authenticated government IDs to over 1 billion residents and with it, a host of other services such as digital cash transfer systems and credit agencies - is a useful innovation infrastructure that’s now available as reusable software components to other countries.
Other examples:
Open Source software, GitHub, JsFiddle, HuggingFace, Civit AI