
Arcane AI

Introduction

Arcane AI is a framework designed to advance decentralized physical infrastructure networks (DePIN) by promoting privacy-preserving AI technologies. Its primary objective is to incentivize GPU providers, model trainers, AI developers, and data owners to participate in a secure ecosystem. Arcane AI achieves this through the integration of federated learning and zero-knowledge proofs, ensuring that AI model inference and data remain private.

Key features of Arcane AI include:

  • Privacy-preserving AI using cryptographic techniques for zero-leakage model inference.

  • Decentralized AI model training and inference across a global network of GPUs.

  • Zero-Knowledge Machine Learning (zkML) for verifiable AI execution without revealing sensitive data.

  • Utilization of Transformer-based Large Language Models (LLMs) for accurate natural language processing.

Zero-Knowledge Proofs

Arcane AI Integration:

  • Groth16: In Arcane AI, we’ve implemented the Groth16 zk-SNARK to secure user inputs (prompts) in a highly efficient way. Groth16 is a zero-knowledge proof system that ensures user prompts remain private and tamper-proof throughout the process.

  • Securing User Input: Every time a user interacts with ZkSurfer—whether they’re asking for text or image generation—a zk-proof is automatically generated alongside their prompt. This proof guarantees that the prompt is authentic and has not been altered, all without revealing the actual content of the prompt to the backend or server side.

  • Generating Proofs with Every Interaction: Each interaction within the chatbot generates a unique zk-proof, ensuring that every exchange is verifiable. The generated proof can be downloaded by the user for further verification.

  • Regardless of whether the output is text or an image, every output ZkSurfer generates is inseparable from the zk-proof corresponding to the input that produced it. This linkage keeps the process reliable: any third party can check that an output corresponds to the verified prompt from which it was produced.

  • Fast Verification: The Groth16 protocol, which uses the bn128 curve, makes these proofs fast to verify. This lets Arcane AI serve many requests quickly while remaining protected by zk-SNARKs.

  • Privacy: ZkSurfer maintains user privacy throughout every interaction. Entered data remains both reliable and secure through the use of the zk-proof, and no private details are revealed.

  • Auditability: Every transaction on Arcane AI is logged with a corresponding zk-proof, creating an immutable, cryptographically secure record of user engagements that can be verified at any time. Users and external parties can confirm that inputs and outputs have not been tampered with by the system.

  • Proof Download for Verification: Users can download their zk-proofs for offline verification, retaining control over their interactions. At any time, they can check that their prompts and the corresponding responses are genuine.
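The proof-and-verification flow above can be illustrated with a simplified commitment scheme. This is only a conceptual sketch of the binding property (hidden prompt → commitment → output tag): a real Groth16 proof requires an arithmetic circuit and a trusted setup, which a plain hash commitment does not provide.

```python
import hashlib
import secrets

def commit(prompt: str) -> tuple[str, str]:
    """Commit to a prompt without revealing it; returns (commitment, nonce)."""
    nonce = secrets.token_hex(16)
    digest = hashlib.sha256((nonce + prompt).encode()).hexdigest()
    return digest, nonce

def bind_output(commitment: str, output: str) -> str:
    """Tie a generated output to the committed prompt."""
    return hashlib.sha256((commitment + output).encode()).hexdigest()

def verify(prompt: str, nonce: str, commitment: str, output: str, tag: str) -> bool:
    """Anyone holding (prompt, nonce) can check both the commitment and the binding."""
    if hashlib.sha256((nonce + prompt).encode()).hexdigest() != commitment:
        return False
    return bind_output(commitment, output) == tag

# Example: the server only ever sees the commitment, never the prompt.
c, n = commit("generate a sunset image")
tag = bind_output(c, "image-bytes...")
assert verify("generate a sunset image", n, c, "image-bytes...", tag)
assert not verify("a different prompt", n, c, "image-bytes...", tag)
```

The user, who holds the nonce, can later demonstrate which prompt produced which output without the server having seen the prompt in the clear.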

    Getting Started with Arcane AI

    Welcome to Arcane AI, your solution for browser automation powered by advanced AI techniques. With Arcane AI, you can effortlessly automate tasks in your browser such as setting up a node or sending email and Telegram messages. This guide will walk you through installing and using Arcane AI.

    Table of Contents

    • Installing and Running

    • How it Works - The Action Cycle

    • Tech Stack

    • Supported Use Cases

    • Resources

    Installing and Running

    Installing the Extension

    Arcane AI is currently available exclusively through our GitHub repository. Follow these steps to build and install the extension locally on your machine:

    1. Ensure you have Node.js installed, preferably version 16 or later.

    2. Clone the Arcane AI repository from GitHub.

    3. Navigate to the cloned repository directory.

    4. Install the dependencies using Yarn:

    ```bash
    yarn install
    ```

    Running in Your Browser

    Once the extension is installed, you can access it in two forms:

    • Popup: Press Cmd+Shift+Y (Mac) or Ctrl+Shift+Y (Windows/Linux), or click on the extension logo in your browser.

    • Devtools Panel: Open the browser's developer tools and navigate to the Arcane AI panel.

    Next, you'll need to obtain an API key from Zynapse and paste it into the provided box within the extension. This key will be securely stored in your browser and will not be uploaded to any third-party servers.

    Finally, navigate to the webpage you want Arcane AI to automate actions on (e.g., the OpenAI playground) and start experimenting!

    How it Works - The Action Cycle

    Arcane AI uses our custom transformer model, served through the Zynapse API, to control your browser and carry out predefined or ad-hoc instructions. The action cycle captures the user's instruction, processes it with the transformer model via the Zynapse API, and executes the resulting actions in the browser.
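The cycle described above can be sketched as a small loop: capture an instruction, ask the model to plan actions, and execute them one by one. The `Action` shape and function names below are illustrative assumptions, not the actual Zynapse API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Action:
    kind: str      # e.g. "click", "type", "navigate" (assumed action kinds)
    target: str    # a CSS selector or URL
    value: str = ""

def action_cycle(instruction: str,
                 plan: Callable[[str], list[Action]],
                 execute: Callable[[Action], None]) -> list[Action]:
    """Capture an instruction, ask the model for a plan, execute each action."""
    actions = plan(instruction)   # in Arcane AI this step would call the Zynapse API
    for action in actions:
        execute(action)           # in the extension this step would drive the browser
    return actions

# Stubbed example run with a hard-coded "plan" and a logging executor:
log = []
actions = action_cycle(
    "open the playground",
    plan=lambda text: [Action("navigate", "https://platform.openai.com/playground")],
    execute=log.append,
)
assert log == actions
```

Separating `plan` from `execute` mirrors the description above: the model decides what to do, and the extension is only responsible for carrying the actions out.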

    For more details on how to use Arcane AI and its advanced features, refer to our GitHub repository and documentation.

    Tech Stack

    Arcane AI is built using the following technologies:

    • Node.js

    • Chrome Extension API

    • Custom Transformer Model

    • Zynapse API

    Supported Use Cases

    Node Setup Automation:

    • Automated setup process for nodes, catering to both technical and non-technical users.

    • Streamlined resource allocation for optimal node performance.

    • Compatibility with various node configurations and networks.

    Marketing Automation:

    • Telegram scraping for data collection.

    • Automated email outreach with personalized messaging.

    • Bulk distribution capabilities for efficient marketing campaigns.

    • Integration with popular messaging platforms like Telegram for direct messaging automation.

    Leo-Code Generation:

    • Code generation functionality for the Aleo network.

    • Generation of secure and efficient code based on user input.

    • Integration with Aleo development tools for seamless workflow.

    Privacy-Preserving Image and Video Generation:

    • Integration with Zynapse API for privacy-preserving image and video generation.

    • Secure handling of user data and content.

    • Support for various image and video formats and resolutions.


    Getting Started with Decentralized GPU Clustering

    Welcome to the world of decentralized GPU clustering, where you can contribute your GPU resources to a network for running heavy ML models in a privacy-preserving manner. This guide will walk you through the steps to set up your environment and start contributing to the network.

    Prerequisites

    Before you begin, ensure you have the following prerequisites installed:

    • Docker

    • Nvidia GPU with CUDA support

    • Python 3.x

    Setup

    1. Installation

    First, clone the decentralized GPU clustering repository from GitHub and navigate into the cloned directory. Then, install the required Python packages:

    ```bash
    pip install -r requirements.txt
    ```

    2. Configuration

    Navigate to the config directory and edit the config.yaml file to configure your settings. You can specify your Ethereum wallet address for receiving rewards.
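The documentation doesn't show the schema of `config.yaml`, so the fragment below is purely hypothetical; every field name and value is an assumption made for illustration.

```yaml
# config/config.yaml (illustrative only; field names are assumptions)
wallet_address: "0xYourEthereumAddress"   # where rewards are paid out
gpu_fraction: 1.0                          # share of the GPU to contribute
dashboard_port: 8080
```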

    3. Running the Dashboard

    To access the dashboard and monitor GPU utilization, run the following command:

    ```bash
    python dashboard.py
    ```

    This will start the dashboard server. You can access the dashboard by opening your web browser and navigating to http://localhost:8080.
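The internals of `dashboard.py` aren't shown in the docs; as a rough sketch of what such a dashboard server could look like, here is a minimal stand-in that serves GPU stats as JSON on port 8080 (the stats function and its fields are assumptions).

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def gpu_stats() -> dict:
    # Placeholder numbers; a real dashboard would query the GPU, e.g. via nvidia-smi.
    return {"gpus": 1, "utilization_percent": 0}

class DashboardHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(gpu_stats()).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep per-request console logging quiet

def run(port: int = 8080) -> None:
    """Serve the dashboard at http://localhost:<port>/ until interrupted."""
    HTTPServer(("localhost", port), DashboardHandler).serve_forever()
```

Calling `run()` would then make the stats available at http://localhost:8080, matching the URL mentioned above.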

    4. Contributing GPU Resources

    To contribute your GPU resources to the network, run the following command:

    ```bash
    python contribute.py
    ```

    This will start your GPU node and connect it to the decentralized clustering network. Your GPU will now be available for running ML models.

    Privacy-Preserved Computing

    Our decentralized GPU clustering system utilizes zero-knowledge proofs (zkproofs) to ensure that computations are performed in a privacy-preserving manner. This means that while your GPU is contributing to the network and running ML models, your data and computations remain private and secure.

    Fractional Computing

    In addition to running full ML models, our system also supports fractional computing, allowing you to contribute fractional GPU resources to the network. This enables efficient utilization of GPU resources and maximizes the network's computational power.
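As an illustration of the idea, fractional contribution can be modeled as a ledger of per-job shares of one GPU. The class below is a sketch under that assumption, not the project's actual scheduler.

```python
class FractionalGPU:
    """Tracks fractional allocations of a single GPU (capacity normalized to 1.0)."""

    def __init__(self):
        self.allocations: dict[str, float] = {}

    @property
    def free(self) -> float:
        # Rounded to avoid floating-point drift when shares are added and removed.
        return round(1.0 - sum(self.allocations.values()), 9)

    def allocate(self, job_id: str, fraction: float) -> bool:
        """Grant `fraction` of the GPU to `job_id` if enough capacity remains."""
        if fraction <= 0 or fraction > self.free:
            return False
        self.allocations[job_id] = fraction
        return True

    def release(self, job_id: str) -> None:
        self.allocations.pop(job_id, None)

gpu = FractionalGPU()
assert gpu.allocate("training-job", 0.5)
assert gpu.allocate("inference-job", 0.25)
assert not gpu.allocate("too-big", 0.5)   # only 0.25 of the GPU is left
gpu.release("training-job")
assert gpu.free == 0.75
```

A real scheduler would additionally enforce the fraction at the driver level (memory limits, compute time slices); the ledger only captures the bookkeeping side of fractional computing.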

    Integration and Capabilities

    • Scalable libraries for common machine learning tasks such as data preprocessing, distributed training, hyperparameter tuning, reinforcement learning, and model serving.

    • Pythonic distributed computing primitives for parallelizing and scaling Python applications.

    • Integrations and utilities for deploying a cluster on existing tools and infrastructure such as Kubernetes, AWS, GCP, and Azure.

    For data scientists and machine learning practitioners, it lets you scale jobs without needing infrastructure expertise:

    • Easily parallelize and distribute ML workloads across multiple nodes and GPUs.

    • Leverage the ML ecosystem with native and extensible integrations.

    For ML platform builders and ML engineers:

    • Provides compute abstractions for creating a scalable and robust ML platform.

    • Provides a unified ML API that simplifies onboarding and integration with the broader ML ecosystem.

    • Reduces friction between development and production by enabling the same Python code to scale seamlessly from a laptop to a large cluster.

    For distributed systems engineers, it automatically handles key processes:

    • Orchestration: managing the various components of a distributed system.

    • Scheduling: coordinating when and where tasks are executed.

    • Fault tolerance: ensuring tasks complete despite inevitable points of failure.

    • Auto-scaling: adjusting the number of allocated resources to match dynamic demand.
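The docs don't name the underlying distributed framework, so as a local stand-in, the standard library's thread pool shows the same "map a Python function over shards of work" pattern the bullets above describe; on a real cluster a scheduler would place these tasks on remote workers instead.

```python
from concurrent.futures import ThreadPoolExecutor

def preprocess(shard: list[int]) -> int:
    """Stand-in for a per-shard ML preprocessing task."""
    return sum(x * x for x in shard)

def run_parallel(shards: list[list[int]]) -> list[int]:
    # A cluster scheduler would distribute these tasks across nodes;
    # locally, a thread pool gives the same map-style programming model.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(preprocess, shards))

results = run_parallel([[1, 2], [3, 4], [5]])
assert results == [5, 25, 25]
```

Keeping the task a plain function is what lets the same code scale from a laptop pool to a cluster: only the executor changes, not `preprocess`.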

    Conclusion

    Congratulations! You have successfully set up your environment and started contributing to the decentralized GPU clustering network. You can now monitor GPU utilization, run heavy ML models, and contribute to the network's computational power in a privacy-preserving manner.

    For more information and advanced usage, refer to the documentation or reach out to our community for support.

    Happy computing!

  • Build the package:

    ```bash
    yarn build
    ```

  • Load the extension in Chrome:

    • Navigate to chrome://extensions/ in your Chrome browser.

    • Enable Developer mode.

    • Click on "Load unpacked extension" and select the build folder generated by yarn build.

    FAQs

    How to get Telegram API ID and Hash?

    Before using Telegram functions you need to get your own API ID and hash:

    1. Log in at my.telegram.org with the Telegram account (phone number) that will be used for sending DMs and scraping group members.

    2. Click on API development tools.

    3. A Create new application window will appear. Fill in your application details. There is no need to enter any URL, and currently only the first two fields (App title and Short name) can be changed later.

    4. Click on Create application at the end. Remember that your API hash is secret and Telegram won’t let you revoke it. Don’t post it anywhere!
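The API ID and hash are typically kept out of source control and loaded from a local file at runtime; a hypothetical `.env` fragment (all names and values below are placeholders, not real credentials) might look like:

```
# .env (illustrative only; never commit this file)
TELEGRAM_API_ID=1234567
TELEGRAM_API_HASH=your_api_hash_here
TELEGRAM_PHONE=+15551234567
```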

    Tokenomics

    Total Supply: 1B

    Liquidity Pool: LP Burned

    Tax: 0% Buy/Sell Tax

    Ownership: Ownership Renounced