Nodes

Teneo's architecture is designed to support both DePIN and AI ecosystems through a unified node approach that consolidates all functionality into a single node type for efficient, secure, and robust network management.

Node Security and Operation

Each Teneo Node operates as a sealed, encrypted virtual machine on an independently managed server. This "sealed" status ensures that neither the node operator nor any other party can inspect the node's internal state or the data it processes.

Threshold Keys and Distributed Key Generation

Through distributed key generation (DKG), each Teneo node participates in creating public/private key pairs without any single party ever holding the entire private key. Instead, each node holds a key share that it uses for signing and decrypting data.
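
The sketch below illustrates how dealerless key generation of this kind can work in principle. It is an illustration only, not Teneo's protocol: the toy prime field, node counts, and the omission of verification and complaint rounds are assumptions for brevity, and a production DKG would operate over an elliptic-curve group with verifiable secret sharing.

```python
"""
Minimal, illustrative Pedersen-style DKG over a prime field.
NOT Teneo's actual protocol -- parameters and structure are assumptions.
"""
import secrets

PRIME = 2**127 - 1  # toy prime field; real systems use an elliptic-curve group order

def random_polynomial(threshold: int) -> list[int]:
    """Random polynomial of degree threshold-1; coefficient[0] is this node's secret."""
    return [secrets.randbelow(PRIME) for _ in range(threshold)]

def evaluate(poly: list[int], x: int) -> int:
    """Evaluate the polynomial at x modulo PRIME (Horner's rule)."""
    result = 0
    for coeff in reversed(poly):
        result = (result * x + coeff) % PRIME
    return result

def run_dkg(num_nodes: int, threshold: int) -> dict[int, int]:
    """Each node deals shares of its own random secret; every node sums the
    shares it receives. No single node ever sees the combined secret."""
    polys = {i: random_polynomial(threshold) for i in range(1, num_nodes + 1)}
    return {
        j: sum(evaluate(polys[i], j) for i in polys) % PRIME
        for j in range(1, num_nodes + 1)
    }  # share j would remain sealed inside node j

def reconstruct(shares: dict[int, int]) -> int:
    """Lagrange interpolation at x=0 -- shown only to verify the demo.
    In production the combined key is never reconstructed; nodes sign
    with their shares instead."""
    secret = 0
    for i, share_i in shares.items():
        num, den = 1, 1
        for j in shares:
            if j != i:
                num = (num * -j) % PRIME
                den = (den * (i - j)) % PRIME
        secret = (secret + share_i * num * pow(den, -1, PRIME)) % PRIME
    return secret

if __name__ == "__main__":
    shares = run_dkg(num_nodes=5, threshold=3)
    subset = {k: shares[k] for k in (1, 3, 5)}  # any 3 of 5 shares suffice
    assert reconstruct(subset) == reconstruct(shares)
    print("3-of-5 reconstruction matches full reconstruction")
```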

Consensus Mechanism (PoS)

  • Staking: Node operators use the $TENEO token to meet staking requirements and receive rewards for their services. $TENEO is also used to pay for transactions on the Data Layer.

  • Network Consensus: An operation is executed only when at least two-thirds of the network's nodes participate in approving it (see the sketch after this list).

  • Key Distribution: No single node or client ever gains access to the complete private keys.

  • Curve Flexibility: The protocol supports various cryptographic curves and signature schemes, enabling broad interoperability.
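
As a concrete illustration of the two-thirds rule described above, the short sketch below checks whether enough nodes have approved an operation before it executes. The node identifiers and function shape are assumptions for illustration, not Teneo's implementation.

```python
"""Illustrative check of the two-thirds participation rule."""
import math

def has_supermajority(approvals: set[str], all_nodes: set[str]) -> bool:
    """An operation may execute only if at least two-thirds of the
    registered nodes have signed off on it."""
    required = math.ceil(2 * len(all_nodes) / 3)
    return len(approvals & all_nodes) >= required

if __name__ == "__main__":
    nodes = {f"node-{i}" for i in range(9)}
    print(has_supermajority({f"node-{i}" for i in range(6)}, nodes))  # True  (6 of 9)
    print(has_supermajority({f"node-{i}" for i in range(5)}, nodes))  # False (5 of 9)
```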

Sealed and Confidential Hardware

Teneo node operators run their nodes as bare-metal installations with AMD’s SEV-SNP, which ensures they can never directly access the key shares or the computation taking place inside each node (a simplified attestation check is sketched after the list below).

  • Trusted Execution Environment (TEE): SEV-SNP provides advanced hardware-level isolation for network operations.

  • Code Immutability and Confidentiality: Programs within the TEE remain immutable and private, ensuring consistent operational integrity.
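
The sketch below shows, in simplified form, how a peer could refuse to trust a node unless its TEE attestation carries an expected launch measurement. The report format, field names, and allow-list are hypothetical; real SEV-SNP attestation also involves AMD-signed certificate chains and report-signature verification, which are omitted here.

```python
"""Hypothetical attestation check -- field names and allow-list are assumptions."""
from dataclasses import dataclass

# Hypothetical allow-list of launch measurements for approved sealed node images.
APPROVED_MEASUREMENTS = {
    "9f2c...e41a",  # placeholder digest of an approved node image
}

@dataclass
class AttestationReport:
    measurement: str       # digest of the code/data loaded into the sealed VM
    signature_valid: bool  # assume the signature chain was verified upstream

def is_trusted(report: AttestationReport) -> bool:
    """Accept a node only if its report is properly signed and its
    measurement matches a known-good sealed image."""
    return report.signature_valid and report.measurement in APPROVED_MEASUREMENTS

if __name__ == "__main__":
    print(is_trusted(AttestationReport("9f2c...e41a", True)))  # True
    print(is_trusted(AttestationReport("deadbeef", True)))     # False
```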

Execution Environment

Each Teneo node includes an execution environment that allows developers to write immutable programs, called Teneo Actions, which govern signing, computing tasks, and encryption operations.
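
The following is a hypothetical sketch of what such a program might look like: a small, deterministic policy that decides whether a node contributes its key share to a requested signing operation. The interfaces (SignRequest, ALLOWED_PURPOSES, teneo_action) are invented for illustration and do not describe the actual Teneo Action API.

```python
"""Hypothetical Teneo Action shape -- all names here are illustrative assumptions."""
from dataclasses import dataclass

@dataclass(frozen=True)
class SignRequest:
    requester: str
    payload_digest: str
    purpose: str

ALLOWED_PURPOSES = {"data-attestation", "reward-claim"}  # assumed policy

def teneo_action(request: SignRequest) -> bool:
    """Return True to let this node contribute its key share to the
    threshold signature, False to refuse."""
    return request.purpose in ALLOWED_PURPOSES and request.payload_digest != ""

if __name__ == "__main__":
    print(teneo_action(SignRequest("app-123", "ab12cd", "data-attestation")))  # True
    print(teneo_action(SignRequest("app-123", "ab12cd", "arbitrary-spend")))   # False
```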

Distributed Computing Power and Storage

Teneo's nodes provide distributed computing power and storage, capable of managing extensive data pipelines or training custom large language models. This is particularly beneficial for small and medium-sized businesses, which gain access to advanced applications without needing significant technological infrastructure of their own.
