HIVE Digital Technologies operates a fleet of approximately 38,000 commercial-grade NVIDIA GPUs (graphics processing units). HIVE's GPU operations are powered by renewable energy (primarily hydroelectric).
High-end NVIDIA GPUs are in high demand today because they are required to power fast-growing AI technologies such as large language models (LLMs), the technology behind ChatGPT, and text-to-image models such as Stable Diffusion and Midjourney.
Today HIVE puts its GPU fleet to work by:
- GPU as a Service: Renting out bare-metal servers with 5-10 GPUs each on compute marketplaces to customers that require high-performance computing (HPC) and AI capabilities
- Building HIVE Cloud, a service offering affordable GPU compute and privacy to small and mid-sized businesses
- Mining Proof-of-Work digital assets, with payment received in Bitcoin, adding to the daily production of the Company's ASIC Bitcoin mining operations
The company's GPU fleet includes:
- 4,000+ NVIDIA A40s w/ 48 GB RAM
- 400+ NVIDIA RTX A6000s w/ 48 GB RAM
- 12,000+ NVIDIA RTX A5000s w/ 24 GB RAM
- 20,000+ NVIDIA RTX A4000s w/ 16 GB RAM
HIVE is currently deploying these GPUs in powerful Supermicro servers, each of which can hold up to 10 NVIDIA A40 or A6000 GPUs. This gives each server a robust 480 GB of GPU memory.
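As a quick sanity check on the figures above, the per-server and fleet-wide GPU memory can be tallied from the listed counts. This is a back-of-the-envelope sketch only; the "+" in each count means the totals below are lower bounds.

```python
# Fleet inventory from the list above: model -> (count, GB of GPU memory each).
# Counts are lower bounds ("4,000+", etc.).
FLEET = {
    "A40":       (4_000, 48),
    "RTX A6000": (400,   48),
    "RTX A5000": (12_000, 24),
    "RTX A4000": (20_000, 16),
}

def server_memory_gb(gpus_per_server: int = 10, gb_per_gpu: int = 48) -> int:
    """GPU memory in one Supermicro server fully populated with A40/A6000 cards."""
    return gpus_per_server * gb_per_gpu

def fleet_memory_gb(fleet: dict) -> int:
    """Lower bound on total GPU memory across the fleet, in GB."""
    return sum(count * gb for count, gb in fleet.values())

print(server_memory_gb())      # 480 GB per fully loaded server
print(fleet_memory_gb(FLEET))  # 819200 GB, i.e. roughly 800 TB fleet-wide
```

So a single fully populated server carries 480 GB of GPU memory, and the listed fleet carries at least ~800 TB in aggregate.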
GPU Infrastructure for AI & Machine Learning
Today, HIVE's fleet consists of data-centre-grade NVIDIA GPUs. These cards are built to handle modern AI, HPC, and visual workloads.
The company operates thousands of NVIDIA A40 GPUs, a card that, according to the manufacturer, combines “best-in-class professional graphics with powerful compute and AI acceleration to meet today's design, creative, and scientific challenges”.
The NVIDIA A40, RTX A6000, RTX A5000, and RTX A4000 GPUs feature:
- Third-generation Tensor Cores, which according to NVIDIA provide “up to 5X the training throughput over the previous generation to accelerate AI and data science model training without requiring any code changes.”
- PCI Express Gen 4, providing high-bandwidth throughput for lightning-fast data transfer
- Virtualization readiness, meaning multiple GPUs can be pooled for more demanding AI jobs such as training and commercial deployment
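Before assigning jobs to a multi-GPU server like the ones described above, a scheduler typically enumerates the GPUs on the node. The snippet below is a minimal illustration, not HIVE's actual tooling: it shells out to NVIDIA's `nvidia-smi` utility and parses its CSV output, and the helper names are hypothetical.

```python
import csv
import io
import subprocess

def parse_gpu_csv(text: str) -> list:
    """Parse `nvidia-smi --format=csv` output into a list of dicts."""
    rows = list(csv.reader(io.StringIO(text), skipinitialspace=True))
    header, data = rows[0], rows[1:]
    return [dict(zip(header, row)) for row in data]

def list_gpus() -> list:
    """Enumerate local GPUs (requires the NVIDIA driver / nvidia-smi on PATH)."""
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=index,name,memory.total",
         "--format=csv"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_gpu_csv(out)
```

On a server holding ten A40s, `list_gpus()` would return ten entries, one per card, which a dispatcher could then hand out to tenants or group for a single large training job.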
HIVE Digital Technologies' subsidiary HIVE Performance Computing Ltd. is currently building HIVE Cloud, an enterprise-grade GPU Cloud service.
HIVE Cloud will offer enterprise-grade cloud services for companies that wish to run training or inference on large language models but require privacy and ownership of their data. Today's publicly available GPT models do not offer the data privacy and ownership that many corporations require, even as those corporations look to AI compute to optimize their businesses.
The company is currently beta-testing HIVE Cloud with a select group of customers, and aims to launch HIVE Cloud to the public by the end of Q4 2023.
- HIVE operates a 30 MW data centre in Lachute, Quebec and a 70 MW data centre in New Brunswick, with best-in-class efficiency and stable access to renewable energy at low cost. Each facility has a dedicated team of HIVE technicians with deep expertise in maintaining data centres.
- Energy re-use: HIVE is using heat recapture in Lachute. Our facility heats a 200,000 sq ft factory that manufactures swimming pools, saving the factory energy during the cold Quebec winters.
- Utilizing excess green/renewable energy and a cold climate
- Home to the majority of HIVE's GPU fleet
- Heat recapture and recycling
- Greenhouse under development: transferred heat from HIVE's Boden facility will warm a 90,000 sq ft greenhouse growing tomatoes and cucumbers, bringing local produce to northern Sweden (estimated completion Q3 2024)
- Providing demand response and frequency response as a service to the local grid operator
- Higher efficiency thanks to mild year-round temperatures, with less energy dedicated to cooling our facilities
- Electricity provided by renewable geothermal and hydroelectric power