Computational Storage for High-Throughput Storage Systems
MAR 17, 2026 · 9 MIN READ
Computational Storage Background and Objectives
Computational storage represents a paradigm shift in data processing architecture, emerging from the fundamental limitations of traditional storage systems where data must be moved between storage devices and processing units. This approach integrates processing capabilities directly into storage devices, enabling data to be processed where it resides rather than requiring costly data movement across system interconnects.
The evolution of computational storage stems from the exponential growth in data generation and the increasing performance gap between storage bandwidth and computational requirements. Traditional architectures face significant bottlenecks when processing large datasets, as data transfer overhead often dominates total processing time. The concept gained momentum with the advent of solid-state drives and programmable hardware, which provided the foundation for embedding computational resources within storage devices.
High-throughput storage systems particularly benefit from computational storage due to their inherent data-intensive nature. These systems typically handle massive volumes of data requiring real-time or near-real-time processing, making data movement costs prohibitively expensive. Applications such as big data analytics, artificial intelligence workloads, database operations, and scientific computing represent primary use cases where computational storage delivers substantial performance improvements.
The primary objective of implementing computational storage in high-throughput environments is to minimize data movement while maximizing processing efficiency. By performing computations at the storage layer, systems can achieve significant reductions in latency, power consumption, and network bandwidth utilization. This approach enables more efficient utilization of system resources and improved overall throughput.
Key technical objectives include developing standardized interfaces for computational storage devices, optimizing workload distribution between host processors and storage-embedded computing units, and ensuring seamless integration with existing storage infrastructures. The technology aims to provide transparent acceleration for data-intensive operations while maintaining compatibility with current software stacks and storage protocols.
The strategic goal extends beyond performance improvements to encompass cost reduction and energy efficiency. Computational storage seeks to eliminate unnecessary data transfers, reduce server CPU utilization for routine data processing tasks, and enable more scalable system architectures that can handle growing data volumes without proportional increases in infrastructure complexity.
Market Demand for High-Throughput Storage Solutions
The global demand for high-throughput storage solutions has experienced unprecedented growth driven by the exponential increase in data generation across multiple industries. Enterprise data centers, cloud service providers, and hyperscale computing environments are generating massive volumes of data that require immediate processing and storage capabilities far beyond traditional storage architectures.
Financial services organizations processing real-time trading data, telecommunications companies handling network analytics, and media companies managing high-resolution content streams represent key market segments driving this demand. These sectors require storage systems capable of handling sustained high-bandwidth workloads while maintaining low latency for critical applications.
The emergence of artificial intelligence and machine learning workloads has fundamentally transformed storage requirements. Training large language models, computer vision applications, and deep learning algorithms demand storage systems that can deliver consistent high throughput for both sequential and random access patterns. Traditional storage architectures struggle to meet these performance requirements without significant infrastructure investments.
Edge computing deployments have created additional market pressure for high-throughput storage solutions. Autonomous vehicles, industrial IoT sensors, and smart city infrastructure generate continuous data streams that require local processing and storage capabilities. These applications demand storage systems that can handle high ingestion rates while providing real-time analytics capabilities.
Scientific computing and research institutions represent another significant market segment. High-energy physics experiments, genomic sequencing, climate modeling, and astronomical observations generate petabytes of data requiring immediate analysis. These applications drive demand for storage systems that can sustain extreme throughput levels while supporting complex computational workflows.
The shift toward software-defined infrastructure and containerized applications has intensified performance requirements. Modern applications expect storage systems to provide consistent performance regardless of workload characteristics or concurrent access patterns. This expectation has created market opportunities for innovative storage architectures that can adapt to dynamic workload requirements.
Market research indicates strong growth trajectories for high-throughput storage solutions across vertical markets. Organizations are increasingly prioritizing storage performance as a competitive differentiator, recognizing that data processing speed directly impacts business outcomes and operational efficiency.
Current State and Challenges of Computational Storage
Computational storage technology has emerged as a promising solution to address the growing performance bottlenecks in traditional storage architectures. Currently, the field is experiencing rapid development with multiple implementation approaches being explored simultaneously. Near-data computing architectures are being integrated directly into storage devices, including SSDs, HDDs, and storage controllers, enabling data processing capabilities at the storage layer rather than requiring data movement to separate compute resources.
The current technological landscape is dominated by several key approaches. Storage-class memory technologies such as 3D NAND, emerging non-volatile memories, and persistent memory are being enhanced with embedded processing units. Major storage vendors are developing computational SSDs that incorporate ARM processors, FPGAs, or specialized accelerators directly within the drive enclosure. These solutions aim to offload specific computational tasks from the host CPU while maintaining compatibility with existing storage interfaces.
However, significant technical challenges persist in achieving widespread adoption. Power consumption remains a critical constraint, as computational storage devices must balance processing capabilities with thermal design limits typical of storage form factors. The integration of compute resources within storage devices introduces complex power management requirements that traditional storage systems were not designed to handle.
Programming model standardization presents another substantial challenge. The lack of unified APIs and development frameworks makes it difficult for software developers to effectively utilize computational storage capabilities across different vendor implementations. Current solutions often require vendor-specific programming approaches, limiting portability and increasing development complexity.
Performance optimization challenges are multifaceted, involving the coordination between storage media access patterns, computational workload scheduling, and data movement minimization. Achieving optimal performance requires sophisticated algorithms that can dynamically balance computational tasks with storage I/O operations while maintaining quality of service guarantees for both functions.
Reliability and fault tolerance mechanisms designed for traditional storage systems require fundamental redesign to accommodate computational components. The integration of processing elements introduces new failure modes and necessitates enhanced error detection and correction capabilities that extend beyond traditional storage reliability mechanisms.
Geographically, computational storage development is concentrated in regions with strong semiconductor and storage industries. North America leads in research and early-stage product development, while Asia-Pacific regions, particularly South Korea, Japan, and Taiwan, dominate manufacturing capabilities and supply chain integration.
Existing High-Throughput Storage Architectures
01 Computational storage device architecture with integrated processing
Computational storage devices integrate processing capabilities directly into storage systems to perform data operations locally. This architecture reduces data movement between storage and host processors, thereby improving overall throughput. The computational storage device includes processors, memory controllers, and storage media working in coordination to execute computational tasks on stored data. By offloading computational workloads from the host system to the storage device, the system achieves higher data processing rates and reduced latency.
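The offload idea in this section can be sketched in a few lines. The sketch below is purely illustrative: it models no real device API, and the byte counts simply stand in for interconnect traffic that a near-data filter avoids.

```python
# Illustrative sketch (no real device API): contrast host-side filtering,
# which moves every record across the interconnect, with near-data filtering,
# which moves only matching records. Byte counts stand in for bus traffic.

def host_side_filter(records, predicate):
    """Ship all records to the host, then filter there."""
    bytes_moved = sum(len(r) for r in records)  # everything crosses the bus
    matches = [r for r in records if predicate(r)]
    return matches, bytes_moved

def near_data_filter(records, predicate):
    """Filter inside the storage device; only results cross the bus."""
    matches = [r for r in records if predicate(r)]
    bytes_moved = sum(len(r) for r in matches)  # results only
    return matches, bytes_moved

if __name__ == "__main__":
    data = [b"error: disk full", b"ok", b"error: timeout", b"ok", b"ok"]
    wants_errors = lambda r: r.startswith(b"error")

    m1, moved_host = host_side_filter(data, wants_errors)
    m2, moved_csd = near_data_filter(data, wants_errors)
    assert m1 == m2
    print(f"host path moved {moved_host} B, near-data path moved {moved_csd} B")
```

The more selective the predicate, the larger the gap between the two byte counts, which is the core performance argument for in-storage processing.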
02 Data path optimization and parallel processing techniques
Optimizing data paths within computational storage systems enables parallel processing of multiple data streams simultaneously. This approach utilizes multiple processing units and memory channels to handle concurrent operations, significantly increasing throughput. The system implements advanced scheduling algorithms and resource allocation mechanisms to maximize utilization of available processing resources. Queue management and pipeline architectures ensure continuous data flow without bottlenecks, allowing the storage system to maintain high performance under heavy workloads.
03 Memory and cache management for enhanced performance
Efficient memory and cache management strategies are critical for maximizing computational storage throughput. The system employs multi-level caching hierarchies and intelligent prefetching mechanisms to reduce access latency. Buffer management techniques ensure optimal data staging between different storage tiers. Memory allocation algorithms dynamically adjust resources based on workload characteristics, preventing resource contention and maintaining consistent performance levels across various operational scenarios.
04 Interface protocols and communication optimization
Advanced interface protocols and communication mechanisms enable high-speed data transfer between computational storage devices and host systems. The implementation includes optimized command queuing, reduced protocol overhead, and efficient error handling mechanisms. Communication channels are designed to support high bandwidth requirements while maintaining low latency. The system supports multiple interface standards and implements adaptive mechanisms to optimize throughput based on connection characteristics and workload patterns.
05 Workload-aware resource allocation and scheduling
Intelligent workload analysis and resource allocation mechanisms optimize computational storage throughput by adapting to different application requirements. The system monitors workload characteristics in real-time and dynamically adjusts processing resources, memory allocation, and data placement strategies. Scheduling algorithms prioritize tasks based on performance objectives and resource availability. This adaptive approach ensures efficient utilization of computational and storage resources while maintaining quality of service guarantees for different workload types.
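The priority-based scheduling described above can be sketched with a standard priority queue: latency-sensitive host I/O outranks background computational offloads, and a counter preserves submission order within a priority level. Task names and priority levels below are invented for illustration, not drawn from any real firmware.

```python
import heapq
import itertools

# Minimal sketch of priority-based task scheduling for a computational
# storage device. Lower number = higher priority (0 = host I/O,
# 1 = background compute offload).

class TaskScheduler:
    def __init__(self):
        self._queue = []
        self._counter = itertools.count()  # tie-breaker keeps FIFO order

    def submit(self, task, priority):
        heapq.heappush(self._queue, (priority, next(self._counter), task))

    def run_next(self):
        if not self._queue:
            return None
        _, _, task = heapq.heappop(self._queue)
        return task

sched = TaskScheduler()
sched.submit("compress-block-42", priority=1)   # background compute
sched.submit("host-read-lba-9000", priority=0)  # latency-sensitive I/O
sched.submit("filter-table-scan", priority=1)

order = [sched.run_next() for _ in range(3)]
print(order)  # host I/O drains first, then compute tasks in FIFO order
```

A production scheduler would add aging to prevent starvation of low-priority compute tasks and would bound queue depth to honor quality-of-service targets, but the dispatch structure is the same.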
Key Players in Computational Storage Industry
The computational storage market for high-throughput systems is experiencing rapid evolution, transitioning from an emerging technology phase to early commercial adoption. The market demonstrates substantial growth potential, driven by increasing data-intensive workloads and the need for processing efficiency at the storage layer. Technology maturity varies significantly across market participants, with established semiconductor giants like Samsung Electronics, Micron Technology, Intel, and SK hynix leading in foundational storage technologies and memory solutions. Traditional enterprise players including IBM, Hitachi, and Huawei Technologies bring mature system integration capabilities, while specialized companies like Eidetic Communications and DataDirect Networks focus on computational storage innovations. Cloud hyperscalers such as Google are developing proprietary solutions for internal deployment. The competitive landscape shows a convergence of memory manufacturers, storage system vendors, and technology integrators, indicating the technology's progression toward mainstream enterprise adoption despite remaining challenges in standardization and ecosystem maturity.
Samsung Electronics Co., Ltd.
Technical Solution: Samsung has developed SmartSSD computational storage solutions that integrate ARM-based processors directly into NVMe SSDs, enabling in-storage processing capabilities. Their technology allows data processing to occur at the storage layer, reducing data movement between storage and compute resources. The SmartSSD platform supports various workloads including database acceleration, analytics, and machine learning inference. Samsung's approach focuses on offloading specific computational tasks to the storage device, achieving significant performance improvements in data-intensive applications. The solution provides APIs and development frameworks that allow applications to leverage near-data computing capabilities, particularly beneficial for big data analytics and real-time processing scenarios where traditional storage architectures create bottlenecks.
Strengths: Market-leading NAND flash technology, established enterprise relationships, comprehensive development ecosystem. Weaknesses: Limited computational power compared to dedicated processors, dependency on specific workload optimization.
International Business Machines Corp.
Technical Solution: IBM's computational storage approach leverages their expertise in enterprise storage systems and AI acceleration through their Storage Insights and Spectrum Storage portfolio. They have developed solutions that integrate computational capabilities directly into storage controllers and devices, enabling near-data processing for enterprise workloads. IBM's technology focuses on hybrid cloud environments where data processing can be distributed between traditional compute resources and storage-embedded processors. Their platform supports advanced data management functions including automated tiering, predictive analytics, and real-time data transformation. IBM emphasizes enterprise-grade reliability and security in their computational storage solutions, targeting large-scale data center deployments where reducing data movement overhead is critical for achieving optimal performance in analytics and AI workloads.
Strengths: Enterprise-grade reliability and security features, strong AI and analytics software integration, established enterprise customer base. Weaknesses: Higher cost compared to commodity solutions, complex deployment and management requirements.
Core Innovations in Near-Data Computing
Computational Storage Systems and Methods
Patent: US20220057959A1 (Active)
Innovation
- The implementation of a 3-dimensional versatile processing array (3D-VPA) within SSD controllers, which allows for dynamic reconfiguration and simultaneous processing of NVMe and vendor unique commands, leveraging FPGA flexibility and CPU extension instructions to handle in-situ processing tasks efficiently.
Storage system, computational storage processor and solid-state drive thereof, and data reading method and data writing method therefor
Patent: EP4375842A1 (Pending)
Innovation
- Implementing a point-to-point communication protocol using the PCIe bus between the Solid-State Drive (SSD) and CSP, where the CSP generates operation instructions based on flash memory addresses and SSD resource information, reducing data flow through the CSP by only transmitting instructions and allowing direct data exchange between the SSD and external entities.
Performance Benchmarking and Standards
Performance benchmarking for computational storage in high-throughput storage systems requires standardized methodologies to accurately assess system capabilities and ensure fair comparisons across different implementations. Current benchmarking approaches must account for the unique characteristics of computational storage devices, which combine traditional storage metrics with computational performance indicators.
The Storage Networking Industry Association (SNIA) has established foundational benchmarking frameworks that are being adapted for computational storage environments. These frameworks emphasize the importance of measuring both storage throughput and computational efficiency simultaneously, as traditional storage benchmarks fail to capture the dual nature of computational storage workloads. Key performance indicators include data processing rates, latency reduction compared to host-based processing, and energy efficiency metrics.
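The key performance indicators named above are derived quantities; a benchmark harness computes them from raw measurements. The sketch below uses invented sample numbers purely to show the arithmetic.

```python
# Derive the KPIs mentioned above (processing rate, latency reduction
# versus host-based processing, energy efficiency) from raw measurements.
# All sample numbers are invented for illustration.

def kpis(bytes_processed, seconds, host_latency_ms, csd_latency_ms, joules):
    throughput_mb_s = bytes_processed / seconds / 1e6
    latency_reduction = 1.0 - csd_latency_ms / host_latency_ms
    bytes_per_joule = bytes_processed / joules
    return throughput_mb_s, latency_reduction, bytes_per_joule

tput, red, bpj = kpis(
    bytes_processed=64 * 10**9,  # 64 GB scanned on-device
    seconds=20.0,
    host_latency_ms=8.0,         # host-based processing path
    csd_latency_ms=3.0,          # near-data processing path
    joules=900.0,
)
print(f"{tput:.0f} MB/s, {red:.1%} latency reduction, {bpj / 1e6:.1f} MB/J")
```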
Industry-standard benchmarking tools are evolving to accommodate computational storage architectures. Tools like FIO (Flexible I/O Tester) are being extended with computational workload generators that can simulate real-world scenarios such as database query acceleration, compression operations, and machine learning inference tasks. These enhanced benchmarking suites provide comprehensive performance profiles that reflect actual deployment conditions.
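As a concrete baseline, a conventional fio job of the kind these extended suites build on might look like the following. The device path and parameter values are examples only, not recommendations for any particular drive.

```ini
# Baseline fio job: sustained sequential-read throughput at high queue depth.
# Computational workload generators extend jobs like this with on-device
# processing stages; this file exercises only the storage path.
[global]
ioengine=libaio
direct=1
time_based
runtime=60

[seq-read-128k]
filename=/dev/nvme0n1
rw=read
bs=128k
iodepth=32
numjobs=4
```

Running the same job before and after enabling an on-device function isolates the throughput cost or benefit of the computational stage.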
Standardization efforts focus on establishing consistent measurement methodologies across different computational storage implementations. The emerging standards define specific test scenarios, workload characteristics, and reporting formats to ensure reproducible results. These standards address critical aspects such as queue depth optimization, mixed workload scenarios, and thermal throttling effects on performance.
Performance validation protocols are being developed to certify computational storage devices for specific use cases. These protocols establish minimum performance thresholds and consistency requirements that devices must meet to qualify for high-throughput storage deployments. The certification process includes stress testing under sustained high-load conditions and verification of performance claims under various operational scenarios.
Cross-platform compatibility standards ensure that computational storage devices can be effectively benchmarked regardless of the host system architecture or software stack, promoting broader adoption and integration flexibility.
Energy Efficiency in Computational Storage
Energy efficiency has emerged as a critical design consideration in computational storage systems, particularly as data centers face mounting pressure to reduce operational costs and environmental impact. Traditional storage architectures that separate compute and storage resources often result in significant energy overhead due to data movement across interconnects, making energy optimization a paramount concern for high-throughput storage deployments.
The primary energy consumption in computational storage systems stems from three main sources: processing units performing near-data computations, memory subsystems maintaining data locality, and interconnect infrastructure facilitating data transfers. Processing-in-storage architectures demonstrate substantial energy savings by eliminating the need to move large datasets between storage devices and remote compute nodes, with studies indicating energy reductions of 30-60% compared to conventional approaches.
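The shape of those savings follows from a back-of-envelope model: conventional processing pays a transfer cost for every byte scanned, while near-data processing pays it only for the selected fraction. The per-byte energy costs below are assumed round numbers chosen for illustration, not measurements of any real device.

```python
# Back-of-envelope model of the data-movement savings discussed above.
# Per-byte energy costs are assumed, illustrative values (nanojoules/byte).

TRANSFER_NJ_PER_BYTE = 1.0   # assumed: moving a byte over the interconnect
COMPUTE_NJ_PER_BYTE = 0.4    # assumed: scanning a byte, in either location

def conventional_energy(bytes_scanned):
    # Move everything to the host, then compute there.
    return bytes_scanned * (TRANSFER_NJ_PER_BYTE + COMPUTE_NJ_PER_BYTE)

def near_data_energy(bytes_scanned, selectivity):
    # Compute in the device; move only the selected fraction to the host.
    return (bytes_scanned * COMPUTE_NJ_PER_BYTE
            + bytes_scanned * selectivity * TRANSFER_NJ_PER_BYTE)

scanned = 10**9  # 1 GB scan
for sel in (0.01, 0.1, 0.5):
    saving = 1 - near_data_energy(scanned, sel) / conventional_energy(scanned)
    print(f"selectivity {sel:4.0%}: {saving:.0%} energy saved")
```

Under these assumed constants, savings fall from roughly 70% at 1% selectivity to roughly 36% at 50% selectivity, which is broadly consistent with the 30-60% range cited above for typical selective workloads.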
Modern computational storage devices leverage specialized processing units optimized for energy efficiency, including ARM-based processors, FPGA accelerators, and custom ASICs designed for specific workloads. These processing elements typically operate at lower clock frequencies and voltages compared to general-purpose CPUs, achieving better performance-per-watt ratios while maintaining sufficient computational capability for storage-centric operations.
Dynamic power management plays a crucial role in optimizing energy consumption across varying workload patterns. Computational storage systems implement power-scaling mechanisms that adjust processor frequencies, memory refresh rates, and interconnect speeds based on real-time demand. These adaptive approaches can reduce idle power consumption by up to 70% during low-activity periods while preserving rapid response capability.
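The frequency-scaling part of such a scheme can be sketched as a simple governor that picks the lowest operating point with adequate headroom. The P-state table and the 80% headroom target here are hypothetical; real devices expose their own discrete operating points.

```python
def next_frequency_mhz(utilization: float,
                       current_mhz: int,
                       levels=(200, 400, 800, 1200)) -> int:
    """Pick the lowest frequency level that keeps utilization below ~80%.

    `levels` is an assumed P-state table for illustration.
    """
    if not 0.0 <= utilization <= 1.0:
        raise ValueError("utilization must be in [0, 1]")
    # Express current demand in "MHz worth of work" so it can be
    # compared against each candidate operating point.
    demand = utilization * current_mhz
    for f in levels:
        if demand <= 0.8 * f:
            return f
    return levels[-1]
```

At 10% utilization on a 1200 MHz core, the governor drops to the 200 MHz floor; under sustained heavy load it stays at the top level, trading energy for responsiveness only when the work is actually there.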
Thermal management represents another significant aspect of energy efficiency, as computational storage devices must dissipate heat generated by both storage media and processing units within constrained form factors. Innovative cooling solutions, including advanced heat spreaders and intelligent thermal throttling algorithms, help maintain optimal operating temperatures while minimizing energy overhead from cooling systems.
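One common shape for such a throttling algorithm is a hysteresis loop: back off the compute engine's duty cycle above a hot threshold, recover it below a safe one, and do nothing in between. The thresholds and step size below are illustrative assumptions; vendors tune these per form factor.

```python
def throttle_step(temp_c: float, duty: float,
                  hot_c: float = 85.0, safe_c: float = 75.0,
                  step: float = 0.1) -> float:
    """One iteration of a hysteresis-based thermal throttle.

    `duty` is the fraction of time the compute engine may run;
    thresholds are assumed values for illustration.
    """
    if temp_c >= hot_c:
        duty = max(0.2, duty - step)   # back off, but never stall fully
    elif temp_c <= safe_c:
        duty = min(1.0, duty + step)   # recover headroom once cool
    return round(duty, 2)
```

The dead band between the two thresholds prevents the controller from oscillating when the temperature hovers near a single trip point.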
The integration of emerging non-volatile memory technologies, such as 3D NAND and storage-class memory, further enhances energy efficiency by reducing write amplification and enabling more efficient data placement strategies. These technologies, combined with intelligent data management algorithms, contribute to overall system energy optimization while maintaining high-throughput performance characteristics essential for demanding storage applications.
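Write amplification quantifies this effect directly: the ratio of bytes physically written to NAND to bytes the host requested. A minimal sketch:

```python
def write_amplification(host_bytes_written: int,
                        nand_bytes_written: int) -> float:
    """Write amplification factor (WAF): NAND writes per host byte written.

    WAF = 1.0 is ideal; garbage collection, wear leveling, and metadata
    updates push it higher, and every extra NAND write costs both energy
    and flash endurance.
    """
    if host_bytes_written <= 0:
        raise ValueError("host writes must be positive")
    return nand_bytes_written / host_bytes_written
```

A device that writes 250 GiB of NAND to satisfy 100 GiB of host writes has a WAF of 2.5; data-placement strategies that cut that ratio reduce write energy roughly in proportion.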