Top Research Papers: Confidential Computing & More (Dec 2025)

by Alex Johnson

Stay ahead of the curve with our curated collection of the latest research papers in the dynamic fields of confidential computing, serverless architectures, and container technology. This digest, updated as of December 3rd, 2025, brings you a concise overview of cutting-edge advancements and innovative approaches shaping the future of these critical areas. For an enhanced reading experience and access to even more papers, be sure to visit the GitHub page.

Confidential Computing: Protecting Data in Use

Confidential computing is rapidly becoming a cornerstone of modern security, ensuring data privacy even while it's being processed. This section highlights recent research exploring novel techniques and architectures for secure computation.

Confidential, Attestable, and Efficient Inter-CVM Communication with Arm CCA (2025-12-02)

This paper delves into secure communication between Confidential Virtual Machines (CVMs) built on the Arm Confidential Computing Architecture (CCA). Arm CCA offers a hardware-based security foundation for isolating sensitive workloads, and this research builds on it by exploring efficient, attestable communication protocols. Securing inter-CVM communication is paramount in distributed confidential computing environments, since it prevents unauthorized access to and manipulation of data as it moves between isolated execution environments. The concepts discussed have significant implications for cloud computing, where multiple virtual machines often need to interact while maintaining confidentiality. Because the proposed channels are both efficient and attestable, data transfer is not only secure but also verifiable, adding a further layer of trust to the system. This research is particularly relevant in industries handling highly sensitive data, such as healthcare and finance, where data breaches can have severe consequences. Future research directions could involve extending these protocols to support more complex communication patterns and exploring integration with other security technologies, such as homomorphic encryption.
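
The digest doesn't include the paper's protocol details, but the core idea of an attested channel can be sketched in a few lines: a platform-held key binds a measurement of the CVM image to the channel key the CVM presents, and the peer verifies that binding before trusting the channel. Everything below is illustrative; real Arm CCA attestation uses signed realm tokens rooted in hardware, not a shared HMAC key.

```python
# Conceptual sketch of an attested inter-CVM channel. PLATFORM_KEY stands
# in for the hardware root of trust; names and flow are illustrative.
import hashlib
import hmac
import os

PLATFORM_KEY = os.urandom(32)          # stand-in for the hardware key

def attest(cvm_image: bytes, channel_pub: bytes) -> bytes:
    """Bind a measurement of the CVM image to its channel key."""
    measurement = hashlib.sha256(cvm_image).digest()
    return hmac.new(PLATFORM_KEY, measurement + channel_pub, "sha256").digest()

def verify(cvm_image: bytes, channel_pub: bytes, report: bytes) -> bool:
    """Peer recomputes the expected report before trusting the channel."""
    expected = attest(cvm_image, channel_pub)
    return hmac.compare_digest(expected, report)

image, key = b"cvm-rootfs-v1", os.urandom(32)
report = attest(image, key)
assert verify(image, key, report)            # genuine CVM accepted
assert not verify(b"tampered", key, report)  # modified image rejected
```

The point of binding the measurement to the channel key is that an attacker cannot splice a verified attestation onto a channel they control.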

Fault-tolerant Mutual-Visibility: Complexity and Solutions for Grid-Like Networks (2025-12-01)

This 25-page paper with 3 figures and 1 table explores the complexity of achieving fault-tolerant mutual visibility in grid-like networks, along with concrete solutions. In graph terms, a set of nodes is mutually visible when every pair of them is joined by a shortest path that avoids all other nodes of the set, a property that supports reliable communication and coordination in distributed systems. Fault tolerance adds another layer of complexity, as the property must continue to hold even in the presence of node failures. Grid-like networks, with their regular and predictable structure, are common in many computing environments, including data centers and high-performance computing clusters. The research likely examines the trade-offs between different fault-tolerance mechanisms and their impact on network performance. Understanding these trade-offs is crucial for designing robust, resilient distributed systems that can withstand various failure scenarios. The solutions proposed could involve techniques such as redundancy, error detection, and recovery mechanisms, while the complexity analysis likely considers factors such as network size, connectivity, and the probability of node failures. This research is highly relevant to the design and operation of critical infrastructure systems, where reliability and availability are paramount.

The Beginner's Textbook for Fully Homomorphic Encryption (2025-12-01)

This paper, a comprehensive beginner's guide to Fully Homomorphic Encryption (FHE), represents a significant step toward making this advanced cryptographic technique accessible to a wider audience. FHE allows computations to be performed on encrypted data without decrypting it first, a groundbreaking capability that opens up new possibilities for secure data processing. This textbook-style paper likely covers the fundamental concepts of FHE, the major FHE schemes, and their applications, and may include examples and exercises to help beginners grasp the complexities of the field. The importance of FHE lies in its potential to revolutionize data privacy in domains including cloud computing, healthcare, and finance: by enabling computation on ciphertexts, FHE eliminates the risk of exposing plaintext during processing, a major concern in today's interconnected world. A beginner's textbook is a crucial step in fostering adoption, making it easier for researchers and practitioners to learn and implement this powerful technology. Future developments could involve improving the efficiency of FHE schemes and exploring new applications in emerging areas such as artificial intelligence and blockchain.
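
Full FHE is heavyweight, but the core idea of computing on ciphertexts can be illustrated with the classic Paillier scheme, which is additively homomorphic: multiplying two ciphertexts yields an encryption of the sum of the plaintexts. The sketch below uses insecurely small primes purely for clarity.

```python
# Toy Paillier demo: compute on ciphertexts without decrypting.
# Tiny primes for readability; a real deployment needs ~2048-bit moduli.
import math
import random

p, q = 293, 433                      # toy primes
n = p * q
n2 = n * n
g = n + 1                            # standard generator choice
lam = math.lcm(p - 1, q - 1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)   # modular inverse of L(g^lam)

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return ((pow(c, lam, n2) - 1) // n * mu) % n

c1, c2 = encrypt(17), encrypt(25)
assert decrypt((c1 * c2) % n2) == 42   # E(17) * E(25) decrypts to 17 + 25
```

Fully homomorphic schemes go further, supporting both addition and multiplication on ciphertexts, which is what makes arbitrary computation possible.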

Extended Abstract: Synthesizable Low-overhead Circuit-level Countermeasures and Pro-Active Detection Techniques for Power and EM SCA (2025-11-29)

This extended abstract focuses on synthesizable, low-overhead circuit-level countermeasures and proactive detection techniques for power and electromagnetic side-channel attacks (SCA). Side-channel attacks exploit information leaked during the execution of cryptographic algorithms, such as power consumption and electromagnetic emissions. The abstract likely outlines novel methods for mitigating these attacks at the circuit level, a critical aspect of hardware security. The low-overhead nature of the countermeasures is essential for practical applications, as they should not significantly impact the performance or cost of the hardware, while proactive detection techniques aim to identify and prevent SCAs before they can be successfully launched. Prepared as an example for PhD forum competitions, the abstract likely presents a high-level overview of the research, highlighting the key contributions and potential impact. Future research in this area could involve developing more sophisticated SCA countermeasures, exploring new detection techniques, and validating the effectiveness of these methods against real-world attacks.

DeID-GPT: Zero-shot Medical Text De-Identification by GPT-4 (2025-11-28)

This research introduces DeID-GPT, a novel approach to medical text de-identification using GPT-4 in a zero-shot setting. De-identification is the process of removing personally identifiable information (PII) from text data, which is crucial for protecting patient privacy in healthcare. GPT-4, a powerful large language model, is leveraged to automatically de-identify medical text without requiring any training data. This zero-shot capability is particularly valuable in scenarios where labeled data is scarce or unavailable. The paper likely evaluates the performance of DeID-GPT on various medical text datasets and compares it to existing de-identification methods. The results could demonstrate the effectiveness of GPT-4 in this task and highlight the potential of large language models for privacy-preserving natural language processing. Future research could involve refining DeID-GPT's performance, exploring its application to other languages and text types, and developing robust evaluation metrics for de-identification systems.
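
For context, the kind of rule-based baseline that an LLM de-identifier would be evaluated against can be written in a few lines. The patterns and replacement tags below are illustrative, not from the paper; rule-based systems handle structured PII (IDs, dates, phone numbers) well but miss names and free-form identifiers, which is exactly where zero-shot LLMs are expected to help.

```python
# Minimal regex de-identification baseline; patterns/tags are illustrative.
import re

PATTERNS = [
    (r"\b\d{3}-\d{2}-\d{4}\b", "[SSN]"),            # US SSN, applied first
    (r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b", "[PHONE]"),
    (r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "[EMAIL]"),
    (r"\b\d{1,2}/\d{1,2}/\d{2,4}\b", "[DATE]"),
]

def deidentify(text: str) -> str:
    for pattern, tag in PATTERNS:
        text = re.sub(pattern, tag, text)
    return text

note = "Pt. seen 03/14/2025, callback 555-867-5309, email jdoe@example.com."
print(deidentify(note))
# -> Pt. seen [DATE], callback [PHONE], email [EMAIL].
```

A zero-shot LLM approach replaces the pattern table with a natural-language instruction, trading rule maintenance for model cost and the need to validate outputs.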

Differentially Private Fisher Randomization Tests for Binary Outcomes (2025-11-25)

This 39-page paper with 7 figures presents differentially private Fisher randomization tests for binary outcomes. Differential privacy is a rigorous mathematical framework for protecting the privacy of individuals in statistical analyses. Fisher randomization tests are a non-parametric method for hypothesis testing, commonly used in clinical trials and other scientific studies. This research likely develops new algorithms and techniques for performing Fisher randomization tests while preserving differential privacy. The paper may also analyze the trade-offs between privacy and statistical power, which is a key consideration in differentially private statistical analysis. The findings of this research could be highly relevant to researchers in various fields who need to analyze sensitive data while adhering to privacy regulations. Future research could involve extending these methods to other types of statistical tests and exploring their application in real-world datasets.
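
A naive version of the idea can be sketched as follows: run an ordinary randomization (permutation) test on the difference in proportions, then release the p-value with Laplace noise. The sensitivity bound used here is an illustrative assumption, and noising the p-value directly is almost certainly cruder than the paper's actual mechanism.

```python
# Naive DP randomization test sketch; sensitivity is an assumed bound.
import math
import random

def laplace(scale: float) -> float:
    """Sample Laplace(0, scale) via inverse transform."""
    u = random.random() - 0.5
    return -scale * math.copysign(1, u) * math.log(1 - 2 * abs(u))

def dp_fisher_test(treated, control, epsilon, n_perm=2000):
    def diff(t, c):
        return sum(t) / len(t) - sum(c) / len(c)

    observed = abs(diff(treated, control))
    pooled = list(treated) + list(control)
    hits = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        t, c = pooled[:len(treated)], pooled[len(treated):]
        if abs(diff(t, c)) >= observed:
            hits += 1
    p = hits / n_perm
    # Assumed sensitivity: one changed outcome shifts p by ~1/len(pooled).
    noisy_p = p + laplace((1 / len(pooled)) / epsilon)
    return min(max(noisy_p, 0.0), 1.0)

random.seed(0)
p = dp_fisher_test([1] * 9 + [0], [0] * 9 + [1], epsilon=1.0)
```

The privacy/power trade-off is visible even in this sketch: a smaller epsilon means a larger noise scale, which blurs small p-values and costs statistical power.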

Differentially Private Computation of the Gini Index for Income Inequality (2025-11-24)

This paper focuses on the differentially private computation of the Gini index, a widely used measure of income inequality. The Gini index is a statistical measure that quantifies the disparity in income distribution within a population. Calculating this index in a privacy-preserving manner is crucial for ensuring the confidentiality of individuals' financial data. This research likely develops new algorithms for computing the Gini index under differential privacy, while minimizing the loss of accuracy. The paper may also analyze the privacy-utility trade-offs of different approaches. The results of this work are relevant to policymakers and researchers who need to analyze income inequality data without compromising privacy. Future research could involve extending these methods to other measures of inequality and exploring their application in different economic contexts.
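
A rough sketch of the approach: compute the Gini index exactly, then add Laplace noise calibrated to an assumed sensitivity for incomes bounded in [0, B]. The bound B and the sensitivity formula below are illustrative, not the paper's derivation.

```python
# Gini index plus Laplace noise; sensitivity bound is an assumption.
import math
import random

def gini(incomes) -> float:
    """Standard formula over sorted incomes: G = 2*sum(i*x_i)/(n*sum) - (n+1)/n."""
    xs = sorted(incomes)
    n, total = len(xs), sum(xs)
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return 2 * weighted / (n * total) - (n + 1) / n

def laplace(scale: float) -> float:
    u = random.random() - 0.5
    return -scale * math.copysign(1, u) * math.log(1 - 2 * abs(u))

def dp_gini(incomes, epsilon: float, bound: float) -> float:
    # Rough sensitivity for changing one income within [0, bound].
    sensitivity = 2 * bound / (len(incomes) * max(sum(incomes), bound))
    return gini(incomes) + laplace(sensitivity / epsilon)

equal = [40_000] * 100
assert abs(gini(equal)) < 1e-9        # perfect equality -> Gini 0
```

Sanity checks are cheap here: an all-equal population gives Gini 0, and one person holding everything gives a value near 1.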

Confidential Prompting: Privacy-preserving LLM Inference on Cloud (2025-11-19)

This paper introduces Confidential Prompting, a novel approach to privacy-preserving inference with large language models (LLMs) on cloud platforms. LLMs have become increasingly powerful tools for natural language processing, but their use in cloud environments raises privacy concerns. Confidential Prompting aims to protect the privacy of user prompts and the LLM's responses during inference. This research likely leverages confidential computing techniques, such as secure enclaves, to isolate the LLM and the user's data from the cloud provider, and may also explore cryptographic methods for encrypting prompts and responses. The results of this work could significantly enhance the privacy of LLM-based services in the cloud. Future research could involve optimizing the performance of Confidential Prompting and exploring its application to other types of machine learning models.

A Fuzzy Logic-Based Cryptographic Framework For Real-Time Dynamic Key Generation For Enhanced Data Encryption (2025-11-18)

This paper presents a fuzzy logic-based cryptographic framework for real-time dynamic key generation to enhance data encryption. Dynamic key generation is a crucial security mechanism that involves generating new encryption keys frequently, making it more difficult for attackers to compromise the system. Fuzzy logic is a form of reasoning that allows for uncertainty and imprecision, which can be useful in generating unpredictable keys. This research likely develops a novel cryptographic framework that leverages fuzzy logic to dynamically generate encryption keys in real-time. The paper may also analyze the security properties of the framework and compare it to existing key generation methods. The results of this work could lead to more robust and secure data encryption systems. Future research could involve exploring the application of this framework in different security domains and optimizing its performance for various hardware platforms.
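
To make the pairing of fuzzy logic with key rotation concrete, here is a toy sketch, entirely invented for illustration: triangular membership functions turn noisy runtime signals into a fuzzy "volatility" degree, which is folded with fresh randomness into each derived key. A real framework would need a vetted KDF and a security argument; this only shows the shape of the idea.

```python
# Illustrative fuzzy-driven key rotation; memberships and derivation are
# invented for this sketch, not taken from the paper.
import hashlib
import os

def triangular(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership: 0 at a and c, peaking at 1 when x == b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def derive_key(prev_key: bytes, load: float, latency_ms: float) -> bytes:
    high_load = triangular(load, 0.3, 0.8, 1.0)
    high_latency = triangular(latency_ms, 20, 100, 300)
    volatility = max(high_load, high_latency)      # fuzzy OR of the two
    # Fold the fuzzy score and fresh randomness into the next key.
    material = prev_key + os.urandom(16) + f"{volatility:.3f}".encode()
    return hashlib.sha256(material).digest()

k0 = os.urandom(32)
k1 = derive_key(k0, load=0.9, latency_ms=150)
k2 = derive_key(k1, load=0.9, latency_ms=150)
assert k1 != k2 and len(k1) == 32     # keys rotate on every derivation
```

Note that the unpredictability here comes from `os.urandom`, not from the fuzzy score itself; the fuzzy layer only modulates when and how aggressively keys rotate.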

Linearly Homomorphic Ring Signature Scheme over Lattices (2025-11-17)

This research paper introduces a Linearly Homomorphic Ring Signature (LHRS) scheme over lattices. Ring signatures are a type of digital signature that allows a user to sign a message on behalf of a group of users without revealing which member of the group actually signed the message. Linear homomorphism adds another layer of functionality, allowing linear computations to be performed on the signatures themselves. Lattice-based constructions are widely used in cryptography because the underlying problems are believed to resist quantum attacks. This paper likely presents a novel LHRS scheme built on such lattice assumptions, potentially offering security and privacy features beyond traditional signature schemes. Future research in this area might explore the practical applications of the scheme and investigate its performance characteristics.

A Workflow for Full Traceability of AI Decisions (2025-11-17)

This paper, spanning 10 pages and featuring 10 figures, introduces a workflow designed to ensure full traceability of decisions made by artificial intelligence (AI) systems. AI traceability is becoming increasingly important as AI systems are deployed in critical applications, such as healthcare and finance. Traceability allows for the reconstruction of the decision-making process, enabling auditing, debugging, and accountability. The workflow likely outlines the steps involved in capturing and storing information about AI decisions, including the data used, the algorithms applied, and the reasoning process. The paper may also discuss the tools and technologies required to implement this workflow. Full traceability of AI decisions can enhance trust and transparency in AI systems. Future work in this area may focus on developing standardized traceability frameworks and tools.
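
One common building block for such a workflow is a tamper-evident decision log: each record is chained to its predecessor by hash, so any later edit breaks verification. The sketch below is a generic illustration of that technique, not the paper's workflow; field names are invented.

```python
# Minimal hash-chained audit log for AI decisions; fields are illustrative.
import hashlib
import json

def append_decision(log, inputs, model, output):
    prev = log[-1]["hash"] if log else "0" * 64
    record = {"inputs": inputs, "model": model, "output": output, "prev": prev}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    log.append(record)

def verify_chain(log) -> bool:
    prev = "0" * 64
    for rec in log:
        body = {k: v for k, v in rec.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True

log = []
append_decision(log, {"age": 54}, "risk-model-v2", "refer")
append_decision(log, {"age": 31}, "risk-model-v2", "discharge")
assert verify_chain(log)
log[0]["output"] = "discharge"        # tamper with a past decision
assert not verify_chain(log)
```

In an auditing setting, the same chaining makes it possible to prove exactly which inputs and model version produced a contested decision.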

zkSTAR: A Zero Knowledge System for Time Series Attack Detection Enforcing Regulatory Compliance in Critical Infrastructure Networks (2025-11-14)

This paper presents zkSTAR, a zero-knowledge system for time series attack detection that enforces regulatory compliance in critical infrastructure networks. Zero-knowledge proofs are a cryptographic technique that allows one party to prove to another party that a statement is true without revealing any information beyond the validity of the statement itself. Time series data is commonly used to monitor the performance of critical infrastructure networks, such as power grids and water distribution systems. Detecting attacks on these networks is crucial for maintaining their reliability and security. zkSTAR likely utilizes zero-knowledge proofs to ensure the privacy of the time series data while enabling attack detection. The system also enforces regulatory compliance, which is essential for many critical infrastructure operators. This research contributes to the development of secure and privacy-preserving monitoring systems for critical infrastructure. Future research might focus on enhancing the performance and scalability of zkSTAR.

Securing Generative AI in Healthcare: A Zero-Trust Architecture Powered by Confidential Computing on Google Cloud (2025-11-14)

This 19-page paper with 1 figure and 1 table explores the use of a zero-trust architecture powered by confidential computing on Google Cloud to secure generative AI in healthcare. Generative AI models have the potential to revolutionize healthcare, but they also raise security and privacy concerns. A zero-trust architecture assumes that no user or device is trusted by default, requiring strict authentication and authorization. Confidential computing can protect sensitive healthcare data during processing. This research likely describes a specific implementation of a zero-trust architecture for securing generative AI models in a healthcare setting, leveraging Google Cloud's confidential computing capabilities. The results of this work can provide valuable guidance for healthcare organizations looking to adopt generative AI technologies securely. Future research could involve evaluating the performance and scalability of this architecture in real-world healthcare environments.

Experiences Building Enterprise-Level Privacy-Preserving Federated Learning to Power AI for Science (2025-11-12)

This paper shares experiences in building an enterprise-level privacy-preserving federated learning system to power AI for science. Federated learning is a machine learning technique that enables training models on decentralized data sources without sharing the data itself. This is particularly useful in scientific domains where data may be sensitive or subject to privacy regulations. This research likely describes the challenges and lessons learned in building a federated learning system for a specific scientific application. The paper may also discuss the privacy-preserving techniques used and their impact on model accuracy. The insights shared in this paper can be valuable for organizations looking to implement federated learning in their own scientific endeavors. Future research could focus on developing more efficient and scalable federated learning algorithms and platforms.
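
The core federated-averaging loop that such systems are built around fits in a few lines: each site takes a local gradient step on its own data, and the server averages the resulting weights, weighted by sample count, so raw data never leaves a site. This is the textbook FedAvg pattern on a toy linear model, not the paper's system.

```python
# Toy FedAvg on a one-parameter linear model y ~ w * x; data stays local.
def local_step(w: float, data, lr: float = 0.1) -> float:
    """One least-squares gradient step on this site's (x, y) pairs."""
    g = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * g

def fed_avg(weights, sizes) -> float:
    """Server-side aggregation, weighted by each site's sample count."""
    total = sum(sizes)
    return sum(w * n for w, n in zip(weights, sizes)) / total

sites = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)]]   # both sites follow y = 2x
w = 0.0
for _ in range(200):
    local = [local_step(w, d) for d in sites]       # runs at each site
    w = fed_avg(local, [len(d) for d in sites])     # runs at the server
assert abs(w - 2.0) < 1e-3
```

Privacy-preserving variants add mechanisms on top of this loop, for example secure aggregation of the weight updates or differentially private noise before sharing them.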

Confidentiality in a Card-Based Protocol Under Repeated Biased Shuffles (2025-11-07)

This 17-page paper with 2 figures examines the confidentiality of a card-based protocol under repeated biased shuffles. Card-based protocols are cryptographic protocols that use physical cards to perform secure computations. Biased shuffles are a type of shuffling that does not distribute cards uniformly, which can introduce vulnerabilities in card-based protocols. This research likely analyzes the impact of repeated biased shuffles on the confidentiality of a specific card-based protocol. The paper may also propose countermeasures to mitigate these vulnerabilities. The results of this work can contribute to the design of more secure card-based protocols. Future research could involve developing new card-based protocols with enhanced security properties.

Serverless: The Future of Cloud Computing

Serverless computing is a cloud execution model where the cloud provider dynamically manages the allocation of computing resources. This section explores the latest advancements in serverless technologies, including performance optimization and novel applications.

Tangram: Accelerating Serverless LLM Loading through GPU Memory Reuse and Affinity (2025-12-01)

This paper introduces Tangram, a system designed to accelerate the loading of large language models (LLMs) in serverless environments by leveraging GPU memory reuse and affinity. Serverless computing is well-suited for LLM inference due to its scalability and pay-per-use model, but the cold start latency associated with loading LLMs can be a bottleneck. Tangram likely employs techniques to cache LLM weights in GPU memory and ensure that subsequent invocations of the serverless function reuse the same GPU, thereby reducing loading time. The paper probably presents performance evaluations demonstrating the effectiveness of Tangram in reducing cold start latency and improving overall throughput. This research is critical for enabling the efficient deployment of LLMs in serverless applications. Future work could explore the integration of Tangram with different serverless platforms and LLM frameworks.
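
The routing idea behind memory reuse and affinity can be sketched with a simple warm pool: an invocation goes to a worker that already holds the model's weights when one exists, and only falls back to an expensive cold load otherwise. The class and cost numbers below are illustrative, not Tangram's design.

```python
# Illustrative warm-pool router for model-loading affinity.
class WarmPool:
    def __init__(self):
        self.resident = {}             # model name -> worker holding weights

    def invoke(self, model: str, load_cost=10.0, warm_cost=0.5):
        """Return (path, worker, cost) for one invocation of `model`."""
        if model in self.resident:
            return ("warm", self.resident[model], warm_cost)
        worker = f"gpu-{len(self.resident)}"
        self.resident[model] = worker   # weights stay pinned for reuse
        return ("cold", worker, load_cost)

pool = WarmPool()
first = pool.invoke("llama-7b")
second = pool.invoke("llama-7b")
assert first[0] == "cold" and second[0] == "warm"
assert first[1] == second[1]           # affinity: same worker reused
```

A production system additionally has to decide when to evict pinned weights, since GPU memory kept warm for one model is unavailable to others.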

SlsReuse: LLM-Powered Serverless Function Reuse (2025-11-21)

This paper presents SlsReuse, a system that leverages large language models (LLMs) to enhance serverless function reuse. Function reuse is a key optimization technique in serverless computing, as it reduces the overhead of deploying and invoking new functions. SlsReuse likely uses LLMs to analyze function code and identify opportunities for reuse, such as when different functions perform similar tasks. The paper might describe the architecture of SlsReuse and present experimental results showing its effectiveness in improving serverless application performance. This research contributes to making serverless computing more efficient and cost-effective. Future directions could involve developing more sophisticated LLM-based techniques for function reuse and exploring the application of SlsReuse in different serverless environments.

Combining Serverless and High-Performance Computing Paradigms to Support ML Data-Intensive Applications (2025-11-15)

This 12-page paper with 9 figures and 3 tables explores the combination of serverless and high-performance computing (HPC) paradigms to support machine learning (ML) data-intensive applications. Serverless computing offers scalability and ease of use, while HPC provides the computational power needed for complex ML tasks. This research likely investigates how to effectively integrate these two paradigms to leverage their respective strengths. The paper may present a system architecture or a programming model that enables the seamless execution of ML workflows across serverless and HPC resources. It could also include performance evaluations demonstrating the benefits of this hybrid approach. This research is relevant to a wide range of ML applications, such as scientific simulations and data analytics. Future work could focus on optimizing the communication and data transfer between serverless and HPC environments.

GraphFaaS: Serverless GNN Inference for Burst-Resilient, Real-Time Intrusion Detection (2025-11-13)

This paper introduces GraphFaaS, a system for serverless graph neural network (GNN) inference designed for burst-resilient, real-time intrusion detection. GNNs are a powerful class of machine learning models that are well-suited for analyzing graph-structured data, such as network traffic. Serverless computing provides the scalability and elasticity needed to handle bursts of traffic in intrusion detection systems. GraphFaaS likely leverages serverless functions to perform GNN inference on network traffic data in real-time. The paper may describe the architecture of GraphFaaS and present experimental results demonstrating its performance and scalability. This research contributes to the development of more effective and efficient intrusion detection systems. Future work could involve exploring the application of GraphFaaS in other security domains.

Saarthi: An End-to-End Intelligent Platform for Optimising Distributed Serverless Workloads (2025-11-10)

This 12-page paper with 9 figures, 1 table, and 2 algorithms presents Saarthi, an end-to-end intelligent platform for optimizing distributed serverless workloads. Serverless applications often consist of multiple functions that interact with each other, creating complex workflows. Saarthi likely provides tools and techniques for monitoring, analyzing, and optimizing these workflows. The platform may use machine learning to predict workload patterns and automatically adjust resource allocation. The paper could describe the architecture of Saarthi and present experimental results showing its effectiveness in improving serverless application performance and cost efficiency. This research contributes to making serverless computing more manageable and scalable. Future work might focus on extending Saarthi to support different serverless platforms and workload types.

Gaia: Hybrid Hardware Acceleration for Serverless AI in the 3D Compute Continuum (2025-11-01)

This paper, presented at the IEEE/ACM 12th International Conference on Big Data Computing, Applications and Technologies (BDCAT 25), explores hybrid hardware acceleration for serverless AI in the 3D compute continuum. The 3D compute continuum refers to the integration of computing resources across different layers of the infrastructure, from edge devices to the cloud. Gaia likely proposes a system that leverages different types of hardware accelerators, such as GPUs and FPGAs, to optimize the performance of serverless AI applications. The paper may describe the architecture of Gaia and present experimental results demonstrating its benefits. This research contributes to making AI applications more efficient and scalable in serverless environments. Future work could focus on developing more sophisticated hardware-aware scheduling and resource management techniques.

Fix: Externalizing Network I/O in Serverless Computing (2025-10-31)

This paper, to appear in the 21st European Conference on Computer Systems (EuroSys 26), focuses on externalizing network I/O in serverless computing. Network I/O is a critical performance bottleneck in many serverless applications. Fix likely proposes a novel approach to offload network I/O operations from serverless functions to a separate service, thereby reducing the function's execution time. The paper may describe the architecture of Fix and present experimental results showing its effectiveness in improving serverless application performance. This research contributes to making serverless computing more suitable for network-intensive applications. Future work could involve exploring different techniques for network I/O externalization and optimizing the communication between serverless functions and the external I/O service.

Odyssey: An End-to-End System for Pareto-Optimal Serverless Query Processing (2025-10-29)

This paper introduces Odyssey, an end-to-end system for Pareto-optimal serverless query processing. Pareto optimality refers to a state where it is impossible to improve one objective without worsening another. In the context of serverless query processing, the objectives could be query latency, cost, and resource consumption. Odyssey likely employs techniques to optimize these objectives simultaneously, providing users with a set of Pareto-optimal query execution plans. The paper may describe the architecture of Odyssey and present experimental results demonstrating its effectiveness. This research contributes to making serverless computing more efficient and cost-effective for data analytics applications. Future work could focus on extending Odyssey to support different query languages and data formats.
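
Computing a Pareto-optimal set over candidate plans is itself simple to state: keep a plan only if no other plan is at least as good on every objective and strictly better on one. A minimal sketch over hypothetical (latency, cost) plan tuples:

```python
# Pareto front over query plans; plan tuples are illustrative.
def pareto_front(plans):
    """plans: list of (name, latency_s, cost_usd); lower is better on both."""
    def dominated(p, q):
        # q dominates p: no worse on both objectives, strictly better on one.
        return (q[1] <= p[1] and q[2] <= p[2]) and (q[1] < p[1] or q[2] < p[2])

    return [p for p in plans if not any(dominated(p, q) for q in plans)]

plans = [("A", 2.0, 0.10), ("B", 1.0, 0.30), ("C", 2.5, 0.12), ("D", 0.8, 0.50)]
front = pareto_front(plans)
assert [name for name, *_ in front] == ["A", "B", "D"]   # C is dominated by A
```

The hard part a system like Odyssey tackles is upstream of this filter: cheaply and accurately estimating each candidate plan's latency and cost before execution.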

Roadrunner: Accelerating Data Delivery to WebAssembly-Based Serverless Functions (2025-10-25)

This paper, presented at the 26th International Middleware Conference (Middleware 25), introduces Roadrunner, a system for accelerating data delivery to WebAssembly-based serverless functions. WebAssembly is a portable binary instruction format that is gaining popularity in serverless computing due to its performance and security benefits. Roadrunner likely optimizes the data transfer between data sources and WebAssembly functions, reducing latency and improving overall application performance. The paper may describe the architecture of Roadrunner and present experimental results demonstrating its effectiveness. This research contributes to making WebAssembly a more viable platform for serverless computing. Future work could involve exploring different data delivery techniques and optimizing Roadrunner for various data source types.

ProFaaStinate: Delaying Serverless Function Calls to Optimize Platform Performance (2025-10-24)

This paper, accepted for publication in the Proceedings of the 9th International Workshop on Serverless Computing (WoSC 23), presents ProFaaStinate, a system that delays serverless function calls to optimize platform performance. Delaying function calls can improve platform performance by batching requests, reducing overhead, and enabling more efficient resource utilization. ProFaaStinate likely uses intelligent scheduling algorithms to determine when to delay function calls without significantly impacting application latency. The paper may describe the architecture of ProFaaStinate and present experimental results demonstrating its benefits. This research contributes to making serverless platforms more scalable and efficient. Future work could focus on developing more sophisticated delay scheduling algorithms and exploring the application of ProFaaStinate in different serverless environments.
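
The delay-then-batch idea can be sketched with a toy scheduler: calls arriving within a window are held and flushed together, trading a bounded amount of latency for fewer platform invocations. The class below uses logical time ticks rather than a real clock and is illustrative, not ProFaaStinate's design.

```python
# Toy delay-and-batch scheduler with a logical clock; design is illustrative.
class DelayScheduler:
    def __init__(self, window: int = 5, batch_handler=None):
        self.window = window           # time units each call may be delayed
        self.pending = []              # list of (deadline, payload)
        self.handler = batch_handler or (lambda batch: None)
        self.flushes = 0

    def submit(self, now: int, payload):
        self.pending.append((now + self.window, payload))

    def tick(self, now: int):
        due = [p for d, p in self.pending if d <= now]
        if due:
            self.handler(due)          # one platform invocation per batch
            self.flushes += 1
            self.pending = [(d, p) for d, p in self.pending if d > now]

batches = []
sched = DelayScheduler(window=5, batch_handler=batches.append)
for t in (0, 1, 2):
    sched.submit(t, f"req-{t}")
sched.tick(4)                          # nothing due yet; calls keep waiting
sched.tick(7)                          # all three flushed as a single batch
assert sched.flushes == 1 and batches == [["req-0", "req-1", "req-2"]]
```

The window parameter is the knob that trades added per-call latency against the number of invocations the platform must pay for.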

GeoFF: Federated Serverless Workflows with Data Pre-Fetching (2025-10-23)

This paper introduces GeoFF, a system for federated serverless workflows with data pre-fetching. Federated workflows involve executing tasks across multiple serverless platforms or regions, which can be useful for geographically distributed data or applications. Data pre-fetching can improve performance by proactively transferring data to the serverless functions before they need it. GeoFF likely combines these two techniques to enable efficient execution of federated workflows. The paper may describe the architecture of GeoFF and present experimental results demonstrating its benefits. This research contributes to making serverless computing more suitable for global-scale applications. Future work could involve developing more sophisticated data pre-fetching strategies and optimizing the communication between serverless platforms.

Serverless GPU Architecture for Enterprise HR Analytics: A Production-Scale BDaaS Implementation (2025-10-22)

This 10-page paper with 7 figures and 4 tables presents a serverless GPU architecture for enterprise HR analytics, showcasing a production-scale Big Data as a Service (BDaaS) implementation. GPUs are increasingly used in data analytics due to their ability to accelerate computationally intensive tasks. Serverless computing provides a cost-effective way to utilize GPUs for analytics workloads. This research likely describes a specific implementation of a serverless GPU architecture for HR analytics, highlighting its performance, scalability, and cost efficiency. The paper may also discuss the challenges and lessons learned in building and deploying this system in a production environment. This research contributes to making serverless GPU computing more accessible to enterprises. Future work could focus on optimizing the architecture for different types of analytics workloads.

The Hidden Dangers of Public Serverless Repositories: An Empirical Security Assessment (2025-10-20)

This paper, accepted at ESORICS 2025, presents an empirical security assessment of public serverless repositories. Public serverless repositories are online platforms where developers can share and reuse serverless functions. While these repositories can promote code reuse and collaboration, they also pose security risks if they contain vulnerabilities or malicious code. This research likely analyzes a large number of serverless functions in public repositories to identify common vulnerabilities and security weaknesses. The paper may also discuss the potential impact of these vulnerabilities and propose countermeasures to mitigate them. This research contributes to improving the security of serverless ecosystems. Future work could involve developing automated tools for vulnerability detection in serverless functions.

Object as a Service: Simplifying Cloud-Native Development through Serverless Object Abstraction (2025-10-20)

This paper introduces Object as a Service (OaaS), a concept that aims to simplify cloud-native development through serverless object abstraction. Cloud-native development involves building applications specifically for cloud environments, leveraging services such as serverless computing and containerization. OaaS likely provides a high-level abstraction for managing and accessing objects in the cloud, making it easier for developers to build complex applications. The paper may describe the architecture of OaaS and discuss its benefits for cloud-native development. This research contributes to making cloud computing more accessible and developer-friendly. Future work could focus on developing specific OaaS implementations and evaluating their performance and usability.

FlexPipe: Adapting Dynamic LLM Serving Through Inflight Pipeline Refactoring in Fragmented Serverless Clusters (2025-10-13)

This paper, presented at EuroSys 26, introduces FlexPipe, a system that adapts dynamic LLM serving through inflight pipeline refactoring in fragmented serverless clusters. Fragmented serverless clusters are environments where spare accelerator capacity is scattered in small, uneven pieces across many machines, making it difficult to place a large model's full serving pipeline in one spot. FlexPipe likely optimizes the serving of large language models (LLMs) in these environments by dynamically adjusting the pipeline of operations based on workload patterns and resource availability. The paper may describe the architecture of FlexPipe and present experimental results demonstrating its effectiveness. This research contributes to making serverless computing more suitable for LLM serving. Future work could focus on developing more sophisticated pipeline refactoring algorithms and optimizing FlexPipe for different LLM architectures.

Container Technology: The Building Blocks of Modern Applications

Container technology has revolutionized software deployment and management, providing a lightweight and portable way to package applications. This section highlights recent research in container security, orchestration, and optimization.

BEACON: Automatic Container Policy Generation using Environment-aware Dynamic Analysis (2025-11-29)

This paper presents BEACON, a system designed for automatic container policy generation using environment-aware dynamic analysis. Container policies are security rules that control the behavior of containers, such as network access and file system permissions. Manually creating and managing these policies can be complex and error-prone. BEACON likely uses dynamic analysis to observe the runtime behavior of containers and automatically generate appropriate security policies. The system is also environment-aware, meaning it considers the specific context in which the container is running. The research probably details BEACON's architecture and experimental evaluations, showing its effectiveness in improving container security. Future work might focus on incorporating machine learning techniques for policy generation and adapting to evolving container environments.
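
A minimal sketch of policy generation from dynamic analysis, in the spirit of (but not taken from) BEACON: observe which system calls a container actually makes while it is exercised in its environment, then emit a default-deny allowlist covering only what was seen. The profile format below follows the seccomp-style JSON used by common container runtimes.

```python
import json


def generate_policy(observed_syscalls):
    """Build a seccomp-style profile allowing only observed syscalls."""
    return {
        "defaultAction": "SCMP_ACT_ERRNO",  # deny everything else
        "syscalls": [{
            "names": sorted(set(observed_syscalls)),
            "action": "SCMP_ACT_ALLOW",
        }],
    }


# Hypothetical trace gathered during dynamic analysis of the container.
trace = ["read", "write", "openat", "read", "close"]
print(json.dumps(generate_policy(trace), indent=2))
```

The difficulty such systems must address, which this sketch ignores, is coverage: a syscall never triggered during analysis but needed in production would be wrongly denied, which is where environment awareness matters.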

Controller-Light CI/CD with Jenkins: Remote Container Builds and Automated Artifact Delivery (2025-11-07)

This paper explores Controller-Light Continuous Integration/Continuous Delivery (CI/CD) with Jenkins, focusing on remote container builds and automated artifact delivery. CI/CD is a software development practice that automates the process of building, testing, and deploying applications. Jenkins is a popular open-source CI/CD tool. Controller-Light CI/CD likely refers to a setup where the Jenkins controller has minimal resource requirements, with build tasks executed remotely in containers. This research likely details the configuration and benefits of such a setup, highlighting the efficiency and scalability gains. The paper could provide practical guidance and best practices for implementing Controller-Light CI/CD with Jenkins. Future research could investigate further optimizations and integration with other DevOps tools.
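
The controller-light division of labor can be sketched as follows: the controller only composes and dispatches commands, while the container build and artifact push run on a remote agent. The repository URL, registry name, and command sequence below are illustrative placeholders, not configuration from the paper.

```python
def remote_build_commands(repo, image_tag, registry):
    """Commands a lightweight controller would hand to a remote agent;
    the controller itself never runs the build."""
    image = f"{registry}/{image_tag}"
    return [
        ["git", "clone", "--depth", "1", repo, "src"],
        ["docker", "build", "-t", image, "src"],
        ["docker", "push", image],  # automated artifact delivery
    ]


for cmd in remote_build_commands(
        "https://example.com/app.git", "app:1.0", "registry.example.com"):
    print(" ".join(cmd))
```

Keeping the controller down to command dispatch is what makes it cheap to run and easy to scale horizontally with more agents.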

Adaptive-Sensorless Monitoring of Shipping Containers (2025-11-04)

This paper, published at IEEE Big Data 2025, presents an adaptive-sensorless monitoring system for shipping containers. Monitoring shipping containers is crucial for ensuring the security and integrity of goods during transportation. Traditional monitoring systems often rely on physical sensors, which can be expensive and prone to failure. This research likely proposes a sensorless approach that uses existing data sources, such as GPS and weather data, to infer the condition and location of containers. Adaptive monitoring implies that the system adjusts its monitoring strategy based on the specific characteristics and risks associated with each container. The paper would probably detail the algorithms and techniques used for sensorless monitoring and present experimental results. Future work might focus on incorporating machine learning for predictive maintenance and risk assessment.
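
A toy illustration of the sensorless idea (our own, not the paper's model): instead of reading a physical sensor, infer a container's internal temperature from ambient weather readings along its GPS track and flag spoilage risk for temperature-sensitive cargo. The smoothing window and solar-gain offset are invented for the example.

```python
def estimate_internal_temp(ambient_temps, solar_gain=5.0):
    """Crude steady-state estimate: recent ambient readings averaged,
    plus a fixed solar-heating offset for a closed steel box."""
    window = ambient_temps[-3:]  # last few GPS/weather samples
    return sum(window) / len(window) + solar_gain


def at_risk(ambient_temps, threshold=30.0):
    """Flag cargo whose inferred internal temperature exceeds a limit."""
    return estimate_internal_temp(ambient_temps) > threshold


track = [22.0, 24.5, 27.0, 29.5]  # ambient degrees C along the route
print(round(estimate_internal_temp(track), 1))  # 32.0
print(at_risk(track))                           # True
```

The "adaptive" part would then adjust such thresholds and models per container and cargo type, which this fixed-parameter sketch does not attempt.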

Fast and Robust Point Containment Queries on Trimmed Surfaces (2025-10-29)

This paper focuses on fast and robust point containment queries on trimmed surfaces. Point containment queries are a fundamental operation in computer graphics and geometric modeling, used to determine whether a point lies inside a given surface. Trimmed surfaces are surfaces with holes or boundaries, which add complexity to the containment query problem. This research likely presents new algorithms and data structures for efficiently answering point containment queries on trimmed surfaces. The paper might discuss the trade-offs between different approaches in terms of performance and robustness. The results of this work could have applications in various fields, including CAD/CAM, virtual reality, and computer games. Future research could focus on extending these techniques to more complex geometric shapes and higher dimensions.
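
For context, the classic baseline for this problem maps the query point into the surface's parameter domain and tests it against the trimming loops as 2D polygons using even-odd ray casting. The sketch below shows that baseline, not the paper's presumably more elaborate algorithm.

```python
def point_in_polygon(pt, polygon):
    """Even-odd rule: cast a ray in +x and count edge crossings."""
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does this edge straddle the horizontal line through the point?
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses that line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside


# Unit square with a square hole: points in the hole are "outside"
# the trimmed region, mimicking a trimming loop in parameter space.
outer = [(0, 0), (1, 0), (1, 1), (0, 1)]
hole = [(0.4, 0.4), (0.6, 0.4), (0.6, 0.6), (0.4, 0.6)]


def in_trimmed_region(pt):
    return point_in_polygon(pt, outer) and not point_in_polygon(pt, hole)


print(in_trimmed_region((0.2, 0.2)))  # True
print(in_trimmed_region((0.5, 0.5)))  # False (inside the hole)
```

Robustness problems arise exactly where this baseline is fragile: rays grazing vertices, nearly tangent trimming curves, and floating-point crossings, which is what motivates dedicated algorithms.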

HGraphScale: Hierarchical Graph Learning for Autoscaling Microservice Applications in Container-based Cloud Computing (2025-10-23)

This paper introduces HGraphScale, a hierarchical graph learning approach for autoscaling microservice applications in container-based cloud computing environments. Microservices are a popular architectural style for building scalable and resilient applications. Autoscaling automatically adjusts the number of microservice instances based on workload demands. HGraphScale likely uses graph learning techniques to model the dependencies and interactions between microservices and make informed autoscaling decisions. The paper may describe the architecture of HGraphScale and present experimental results demonstrating its effectiveness in improving application performance and resource utilization. This research contributes to making microservice architectures more manageable and efficient. Future work could focus on incorporating predictive autoscaling and optimizing for different cloud platforms.
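
To illustrate why the dependency graph matters for autoscaling (with the paper's graph-learning policy replaced by a simple utilization rule of our own): a scale-up decision on a frontend should also consider the services it calls, since load propagates downstream. The proportional formula is the one used by Kubernetes' Horizontal Pod Autoscaler; everything else is an invented example.

```python
import math


def desired_replicas(current, utilization, target=0.6):
    """HPA-style rule: replicas = ceil(current * utilization / target)."""
    return max(1, math.ceil(current * utilization / target))


def autoscale(graph, replicas, utilization, target=0.6):
    """graph maps each service to the services it calls. Scale each
    service on its own utilization, then bump callees of any caller
    that scaled up, anticipating the propagated load."""
    decisions = {s: desired_replicas(replicas[s], utilization[s], target)
                 for s in graph}
    for svc, callees in graph.items():
        if decisions[svc] > replicas[svc]:  # caller scaled up
            factor = decisions[svc] / replicas[svc]
            for callee in callees:
                anticipated = utilization[callee] * factor
                decisions[callee] = max(
                    decisions[callee],
                    desired_replicas(replicas[callee], anticipated, target))
    return decisions


graph = {"frontend": ["cart"], "cart": []}
replicas = {"frontend": 2, "cart": 2}
utilization = {"frontend": 0.9, "cart": 0.5}
print(autoscale(graph, replicas, utilization))
# {'frontend': 3, 'cart': 3} -- cart scales preemptively with its caller
```

A learned hierarchical model would replace this hand-written propagation rule with one trained on observed inter-service behavior.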

The Atomic Instruction Gap: Instruction-Tuned LLMs Struggle with Simple, Self-Contained Directives (2025-10-20)

This paper (11 pages, with 1 figure and 8 tables) explores the limitations of instruction-tuned large language models (LLMs) in handling simple, self-contained directives, referred to as the