Exploring Decentralized AI

By

Chetanya Khandelwal, Kelvin Koh

Nov 30, 2023

In November, OpenAI - the high-profile AI company behind ChatGPT - faced a turbulent period when its board ousted CEO Sam Altman, setting off a series of dramatic events. While the exact reasons for Altman's dismissal remain unclear, reports pointed to concerns about the reckless commercialization of OpenAI's AI technology. Altman eventually regained his position as CEO after roughly 95% of the company's employees threatened to leave, revealing widespread dissatisfaction and power struggles within the organization. The leadership crisis has significant implications for businesses that rely on GPT-based technologies: nearly 80% of Fortune 500 companies could have experienced disruptions, highlighting the vulnerability of depending on a single centralized AI provider.


Furthermore, recent controversy has surrounded OpenAI's text-to-image model, DALL-E 2, due to its broad censorship. For example, DALL-E 2 has blocked terms like 'execute,' 'attack,' 'Ukraine,' and images of celebrities. Such blanket censorship blocks innocuous prompts like 'LeBron James attacking the basket' or 'a programmer executing a line of code.' Restricting these models to a private beta also implicitly privileges the Western users who receive initial access, excluding a significant portion of the global population from interacting with and contributing to these models.


This event sparked discussions among tech experts, industry insiders, and CEOs, including notable figures like Elon Musk and Stability AI's Emad Mostaque. This is not how artificial intelligence should be shared: controlled, monitored, and gated by a few big tech companies. Decisions about AI, such as who benefits from it and who can access it, are made by a handful of people in a few corners of the world. The majority of people affected by this technology have no meaningful say in its development. This challenges the core principle of democratizing AI advancements for the greater good. Such concentration of power poses systemic risks, stifles diverse innovation, and raises ethical concerns about AI's governance and societal impact.


Open-source AI models led by Meta, Stability AI, and Hugging Face bridge gaps left by closed-source models. They enhance transparency, fairness, customization, and accessibility, achieving milestones in efficiency and privacy. Despite these advancements, open-source models face challenges, particularly around centralization and incentive structures. Platforms like Hugging Face democratize access but retain centralized control over infrastructure and data ownership, limiting fair governance and composability. The absence of direct financial incentives on these platforms further silos efforts, limiting seamless collaboration and impeding knowledge sharing.


The rise of decentralized AI applications in the crypto space is a paradigm shift towards democratization and decentralization, surpassing traditional open-source repositories. These platforms use blockchain networks to distribute control among participants, creating a more democratic and community-driven ecosystem. They use token-based incentives to encourage collaboration and innovation among data providers and model developers, while also promoting community governance.


One notable project in this field is Bittensor. Founded in 2019 by AI researchers Ala Shaabana, Jacob Steeves, and the pseudonymous "Yuma Rao," Bittensor is a decentralized L1 platform built on Substrate (the same framework that underlies Polkadot). It consists of 32 subnets representing different AI use cases such as text prompting and image generation. Miners, who are AI experts, contribute high-performing models to the network and receive rewards in $TAO tokens. Validators assess and rank these models based on performance and consistency. Incentives drive competition within the ecosystem, with subnets, validators, and miners competing for fixed $TAO emissions. This creates a peer-to-peer intelligence market for permissionless access and AI innovation, improving both user and developer experience.
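As a rough illustration of the incentive flow described above, a fixed per-round emission can be split among miners in proportion to stake-weighted validator scores. This is a hypothetical toy sketch, not Bittensor's actual implementation; the real Yuma Consensus weighting is considerably more involved:

```python
# Toy sketch of splitting a fixed emission among miners based on
# stake-weighted validator scores (illustrative only; Bittensor's
# actual Yuma Consensus mechanism is far more involved).

def split_emission(validator_stakes, scores, emission):
    """validator_stakes: {validator: stake},
    scores: {validator: {miner: score in [0, 1]}},
    emission: total TAO to distribute this round."""
    total_stake = sum(validator_stakes.values())
    # Combine each validator's scores, weighted by its share of stake.
    consensus = {}
    for v, stake in validator_stakes.items():
        for m, s in scores[v].items():
            consensus[m] = consensus.get(m, 0.0) + s * stake / total_stake
    # Split the emission pro-rata by consensus score.
    total_score = sum(consensus.values())
    return {m: emission * c / total_score for m, c in consensus.items()}

rewards = split_emission(
    {"val_a": 600, "val_b": 400},
    {"val_a": {"miner_1": 0.9, "miner_2": 0.1},
     "val_b": {"miner_1": 0.7, "miner_2": 0.3}},
    emission=1.0,
)
```

The key property the sketch captures is that emissions are fixed per round, so miners compete for relative ranking rather than absolute output volume.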


Core to Bittensor's functionalities are:

  1. Knowledge distillation (introduced at Google): Enables collaborative learning among network nodes by transferring knowledge from complex models (teachers) to simpler ones (students), achieving efficient learning without significant performance loss. This allows smaller models to mimic larger models' performance at a fraction of the cost. E.g., Stanford's Vicuna-13B approaches GPT-level performance yet cost only around $300 to train.


  2. Sparsely Gated Mixture of Experts (introduced at Google): Bittensor uses MoE, an architecture comprising multiple specialized sub-models, or "experts," each focusing on distinct data aspects or problem patterns. These experts are orchestrated by a gating network, yielding more nuanced predictions and higher accuracy than a single model. GPT-4's MoE architecture is believed to comprise 16 expert models, each with around 111 billion parameters.


  3. Yuma Consensus (developed by Bittensor): A variation of the Proof of Work (PoW) and Proof of Stake (PoS) mechanisms used in blockchain networks. It rewards miners who contribute valuable machine-learning models and outputs. Miners demonstrate their intelligence by performing machine-learning tasks, and validators rank miner outputs by performance to form a consensus-driven, probabilistic truth about intelligence. Unlike Bitcoin's consensus, Yuma works with probabilities and can reach consensus on problems with non-deterministic outputs.
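The distillation objective in point 1 above can be sketched in a few lines: the student is trained to match the teacher's temperature-softened output distribution via KL divergence (scaled by T² as in the original formulation). This is a minimal, pure-Python illustration; real training adds this term to the student's ordinary task loss:

```python
# Minimal sketch of knowledge distillation: the student matches the
# teacher's softened output distribution. Pure-Python math for clarity;
# in practice this term is combined with the normal task loss.
import math

def softmax(logits, temperature=1.0):
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 per the original formulation."""
    p = softmax(teacher_logits, temperature)  # soft teacher targets
    q = softmax(student_logits, temperature)  # student predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return kl * temperature ** 2
```

Identical logits give zero loss; the further the student's distribution drifts from the teacher's, the larger the penalty, which is what lets a small model absorb a large model's behavior cheaply.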


Bittensor is the first to combine crypto-economic incentives with these AI techniques to create "compoundable AI." Its vision is a network of networks in which multiple LLMs collaborate to build a powerful cumulative model capable of competing with centralized ones. Since October 2023, Bittensor has shown early signs of success: it hosts around 32 subnets with over 5,000 models and 1,000 validators, including Foundry, Polychain, DCG, and more. These models cover a wide range of applications, such as text prompting (try app.corcel.io) and image generation (studio.bitapai.io). Bittensor offers user interfaces spanning chat apps to Twitter bots, showcasing different use cases. Notable research achievements include Opentensor's BTLM-3B-8K, which brings accurate AI models to mobile devices and surpasses even centralized counterparts, and Subnet 4's use of JEPA for a multi-input/output architecture that yields a more human-like AI.


Bittensor is positioned to become a central hub for decentralized AI research, development, and deployment, leveraging exponential growth in mining power and robust economic incentives. Ongoing developments include new subnets for various use cases, such as code-specific AI assistance (similar to GitHub Copilot) and zkML, highlighting the network's diverse developmental landscape. Bittensor is also experimenting with architectural advances like quantization and tokenized model parameterization, demonstrating its dedication to AI research. There are early indications of DeFi implementations on AI networks as well, which could lead to unprecedented applications.


Bittensor’s TAO token is the network's native currency for all economic activity. Like Bitcoin, it has a fixed supply of 21 million, a fair-launch mining distribution, and emissions that halve every four years. Its fully diluted valuation stands at roughly $6 billion (as of Dec 8, 2023). Model providers and validators must pay or stake $TAO for network services, while users of AI services pay in $TAO for API access. $TAO can be staked and currently earns ~20% APY.
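The supply curve implied by a 21-million cap with periodic halvings is a simple geometric series. The sketch below is a hypothetical simplification (block cadence and the exact halving trigger are protocol details not covered here); it only shows how cumulative issuance converges to the cap:

```python
# Geometric emission schedule under a fixed 21M cap: each halving era
# issues half as many tokens as the previous one, so cumulative supply
# converges to the cap (the Bitcoin-style curve that TAO mirrors).
CAP = 21_000_000

def supply_after_eras(n_eras, cap=CAP):
    """Cumulative issuance after n complete halving eras.
    Sum of cap/2 + cap/4 + ... + cap/2**n = cap * (1 - 0.5**n)."""
    return cap * (1 - 0.5 ** n_eras)
```

The first era alone issues half the cap (10.5M), and total supply approaches but never exceeds 21M, which is what gives the token its fixed-supply property.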


Bittensor aims to open up access to AI much as Bitcoin and Ethereum opened up finance. Despite initial support, it faces a long road, like other crypto projects. Challenges include achieving scalable, high-quality inference, onboarding users amid competition from non-crypto apps, and proving that compounded LLMs can match centralized ones. Combining crypto and AI is hard, much as driving crypto adoption was in the past; Bittensor's vision of compounding LLMs holds promise, but there is a long journey ahead.


The intersection of AI and crypto is still in its early stages, but it holds the potential to reshape the AI landscape by fostering collaboration, innovation, and an equitable distribution of benefits. This reduces the growing risks of centralization and enables the development of groundbreaking applications.


To learn more about investment opportunities with Spartan Capital, please contact ir@spartangroup.io