TL;DR: unstructured streaming ETL, new hardware architectures, middleware to power developers of LLM-based apps, workflow execution, and generative content
> At the hardware layer, we are seeing a transition from CPUs to GPUs to TPUs, which are more efficient for training massive (30B+ parameter) models.
Why isn't this a main investment opportunity? E.g., Google and Amazon, who are leading the pack with custom accelerators like TPUs.