
Spore Drive

Spore Drive is an automation tool tailor-made for deploying and scaling Taubyte networks. Inspired by the spore drive technology in Star Trek’s USS Discovery, our Spore Drive plots its course across a network built using a library we developed called mycelium. The mycelium library uses SSH to establish connections across the network, mirroring the interconnectedness of a mycelium fungal network in nature.

Spore Drive was designed to be agent-less and to operate with minimal prerequisites - all it needs is SSH access to the host. This simplicity and specific focus on Taubyte networks mean that deploying and managing your Taubyte network does not require learning another tool or expertise in a broader platform. It’s all about democratizing Cloud Computing!

Here are the key aspects of Spore Drive:

  1. Deployment: Spore Drive simplifies the process of deploying nodes on your network. It consumes a configuration file and effectively “plants” nodes across your mycelium network.

  2. Scaling: While Spore Drive doesn’t scale the network itself, it aids in deploying additional nodes and injects them with the necessary information to seamlessly integrate into the existing network. This greatly simplifies the process of expanding your network as demand grows.

  3. NoOps: The ultimate goal of Spore Drive is to bring your cloud network management as close to a NoOps model as possible. This means minimizing the need for human intervention and operational tasks.

However, it’s important to note that Spore Drive does not offer network monitoring. For monitoring your Taubyte network, we’ve developed another specialized tool known as Q, which will be covered in detail in another section.

By streamlining the deployment and management of your Taubyte network, Spore Drive lets you focus on what matters most - developing and delivering your applications. Let’s democratize Cloud Computing for real!

Configuration

Though Spore Drive will create and update these configuration files for you, it’s invaluable to understand the structure and syntax of these YAML files. Each configuration employs a set of key components playing distinct roles in the deployment of your cloud network.

Components

Before we delve deeper into the use of these components in a Spore Drive configuration, let’s establish an understanding of the foundational elements. This will help you harness the full potential of Spore Drive as it shapes these components into a cohesive and efficient cloud network. Below, we’ve provided a table outlining these pivotal components:

| Component | Description |
| --- | --- |
| Network | Refers to the Cloud itself in Taubyte terminology. |
| Host | A machine in the network; it can be physical or virtual. |
| Key | A combination of a username and SSH keys used for access. |
| Shape | Defines which protocols and plugins tau should run, along with their configuration. |
| Node | An instance of tau. A node is never spelled out as "node" in the configuration; it is simply the combination of a host and a shape. |
| tau | The main binary run on a host to instantiate a Node. |
| Plugin | An extension to tau that adds features or implements interfaces. |

Structure & Location

Configuration files used by Spore Drive are stored in the ~/.taubyte/networks/ directory, where ~ represents your home folder. Each network possesses its own dedicated folder within this directory.

A standard configuration folder is structured as follows:

├── hosts.yaml
├── keys
│   ├── dv_private.key
│   ├── dv_public.pem
│   ├── ssh.pem
│   └── swarm.key
├── keys.yaml
├── network.yaml
└── shapes.yaml

However, if any file becomes substantial in size or is expected to grow significantly, it can be stored in a dedicated folder. Here’s how the configuration would look in that case:

├── hosts
│   ├── host1.yaml
│   ├── host2.yaml
│   ├── ...
│   └── hostN.yaml
├── keys
│   ├── dv_private.key
│   ├── dv_public.pem
│   ├── ssh.pem
│   └── swarm.key
├── keys.yaml
├── network.yaml
└── shapes.yaml

With this arrangement, Spore Drive provides you with the flexibility to maintain your configurations even as they evolve and expand over time.

go-seer

The configuration files can be reorganized and expanded without affecting Spore Drive’s ability to update them, and your added comments will be preserved. This flexibility is made possible thanks to our open-source super YAML parser, go-seer.

Dive into the YAML

In this section, we will delve into the YAML configuration files that Spore Drive utilizes. Familiarizing yourself with these files will not only give you a deeper understanding of your Taubyte network, but also allow you to fine-tune your setup, should the need arise.

Network

Our journey begins with the network.yaml file, the cornerstone of your Taubyte-based Cloud network configuration.

domain:
    root: example.cloud
    generated: g.example.cloud
    validation:
        key:
            private: keys/dv_private.key
            public: keys/dv_public.pem
p2p:
    bootstrap:
        elders:
            - host1
            - host2
            - host3
    swarm:
        key: keys/swarm.key

At the heart of a Taubyte-based Cloud is its Fully Qualified Domain Name (FQDN). The domain section serves the crucial purpose of defining this FQDN. Specifically:

  • root specifies the FQDN for the Cloud.
  • generated outlines an FQDN used for generated domains, like when a user wants to create an HTTP function without explicitly binding it to their own domain.
  • validation offers a pair of keys within the key subsection. These keys are essential for validating the ownership of user-defined domains. Note that the private key is exclusively used when deploying the auth protocol. If you’re adding members to the network who won’t be running auth, you can omit providing them with this key. We are also working on introducing a threshold signature schema, which will facilitate a trustless distribution of the key.
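If you need to create the validation key pair by hand, a standard PEM-encoded asymmetric key pair can be generated with OpenSSL. Note this is a sketch: the specific algorithm expected here (the example below uses an ECDSA P-256 key) is an assumption, so check the tau release you are deploying before relying on it.

```shell
# Create the keys/ directory used by the configuration layout
mkdir -p keys
# Generate a PEM-encoded private key (assumed algorithm: ECDSA P-256)
openssl ecparam -name prime256v1 -genkey -noout -out keys/dv_private.key
# Derive the matching public key referenced by network.yaml
openssl ec -in keys/dv_private.key -pubout -out keys/dv_public.pem
```

The resulting file paths match the `validation.key.private` and `validation.key.public` entries shown above.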

On the peer-to-peer side, a few important elements shape the network:

  • bootstrap lists the nodes used for bootstrapping. In our example, all nodes deployed on host1, host2, and host3 using the elders shape serve this role.
  • swarm allows you to secure and isolate your network from others. This feature will soon be enhanced with a new protocol we’re developing, called Proof-of-Integrity, adding an extra layer of security.
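Since Taubyte nodes communicate over libp2p, the swarm key plausibly follows the standard libp2p private-network pre-shared-key format; treat that as an assumption and verify it against your tau version. Under that assumption, a swarm key can be generated like so:

```shell
# Create the keys/ directory used by the configuration layout
mkdir -p keys
# Write a libp2p-style private-network PSK: a format header,
# an encoding marker, and 32 random bytes in hex
{
  echo "/key/swarm/psk/1.0.0/"
  echo "/base16/"
  openssl rand -hex 32
} > keys/swarm.key
```

Keep this file secret: any host holding it can join the isolated swarm.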

Keys

As we’ve established earlier, Spore Drive is an agentless tool that relies on the SSH protocol to connect to hosts within the network. This requires SSH keys, which are commonly shared across many hosts for a specific user. To streamline their management, we’ve devised a simple approach encapsulated within the keys.yaml file.

key1:
    user: samy
    files:
        - keys/ssh.pem

Here’s a breakdown of the fields:

  • key1 is the identifier for this particular key configuration.
  • user specifies the username to employ when utilizing this key.
  • files enumerates the SSH key files associated with this user. Listing multiple files lets you group slightly heterogeneous keys under a single entry.

Folder Schema

Our flexible folder schema allows for each key to be stored as keys/<key-name>.yaml, thereby accommodating larger configurations.
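As a sketch of that layout, the key1 entry above could live in its own file instead. The exact per-file schema is an assumption here (the file is assumed to contain the entry's body, with the key name taken from the filename), so verify it against a configuration generated by Spore Drive:

```yaml
# keys/key1.yaml — hypothetical per-key file; the entry's name is
# assumed to come from the filename, and the body mirrors keys.yaml
user: samy
files:
    - keys/ssh.pem
```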

Shapes

Shapes act as high-level configuration templates for nodes in the network. Spore Drive utilizes these shapes, along with other configuration parameters defined for the host and network, to generate a node-specific configuration that an instance of tau can consume.

The shapes.yaml file outlines the roles and configurations that each node can embody within the network.

all:
    protocols:
        - auth
        - tns
        - node
        - seer
    ports:
        main:
            p2p: 4224
        lite:
            ipfs: 4288
            p2p: 4244
    plugins:
        - nvidia
        - delta

elders:
    ports:
        main:
            p2p: 4242

In this file, two shapes are defined: all and elders. The elders shape is an example of what we call an epsilon shape, generally used to create bootstrap and/or relay nodes.

Beyond epsilon shapes, a shape definition typically includes:

  • protocols: a list of protocols that should be enabled for the node.
  • ports: defines the ports to be used by the node. If set correctly, multiple shapes/nodes can coexist on the same host. However, there is a caveat: protocols with RESTful endpoints default to port 443, thus only allowing a single instance of that shape on a host. This limitation will be addressed in the future with the introduction of a gateway node.
  • plugins: a list of the plugins to be loaded by the instance using this shape.
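To illustrate the port caveat, here is a sketch of an additional shape that could coexist with all on the same host because its ports don't overlap. The shape name compute and the port numbers are hypothetical, not part of the configuration above:

```yaml
# Hypothetical extra shape; coexists with `all` on one host
# because its p2p port (4262) differs from `all`'s (4224)
compute:
    protocols:
        - node
    ports:
        main:
            p2p: 4262
```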

Hosts & Nodes

In the context of a Taubyte network, hosts are machines that can run one or more nodes (instances of tau). Conceptually, a node is a specific shape operating on a particular host. Hence, there is no explicit way to define node names.

Let’s take a closer look at the hosts.yaml file.

host1:
    addr:
        - 1.2.3.4/32
    ssh:
        addr: 1.2.3.4
        port: 22
        key: gcp
    gps:
        - "22"
        - "33"
    shapes:
        all:
            key: CAESQBPLN....F6qvtguXAY=
            id: 12D3KooWRGorYi54VebyxZwrSwGPtj741VJwVMtmPd8tRxRqJWsw
        elders:
            key: CAESQFurz....JxoasqSKDF=
            id: 12D3KooWQb8SeJxhyAS3SXHGURG2K5zYE5NtnFMUh7qkkMamCAed

  • host1 is the name of the host. This doesn’t necessarily have to match the machine’s hostname.
  • addr includes a list of all the host’s IP addresses, noted in CIDR format to allow for optimal bootstrapping and fine-tuning of other parameters.
  • ssh details the method to connect to the host. The addr and port fields are quite standard, while the key field refers to the previously mentioned SSH key, supplying the username and a list of valid SSH keys.
  • shapes (which can be conceptually thought of as nodes, but we maintain the term shapes for ease of understanding) specify every shape that will run on this host.
    • all & elders here correspond to the shape names defined in the shapes.yaml file.
    • key represents the private key of the host. If it’s omitted, the node will generate a private key and share only the id, which will then be filled out by Spore Drive in the file.
    • id is the node’s unique identifier.

The shapes section creates a clear connection between the host and its associated nodes, providing an intuitive framework for deploying and managing different node configurations on the host.
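As an example of the key-omission behavior described above, a freshly added host might look like the sketch below. The host name, addresses, and key reference are hypothetical, and the assumption is that leaving a shape's entry empty causes the node to generate its own private key, after which Spore Drive fills in the resulting id:

```yaml
# hosts/host4.yaml — hypothetical new host; `key` and `id` are
# omitted under `all`, so the node is assumed to generate its own
# private key and Spore Drive to record the `id` here afterwards
host4:
    addr:
        - 5.6.7.8/32
    ssh:
        addr: 5.6.7.8
        port: 22
        key: key1
    shapes:
        all:
```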