Abstract: Internet of Things (IoT) applications equip rural producers with decision-support tools and automated solutions that boost agribusiness productivity, quality, and profit. However, most poultry farmers still use conventional methods of operation, in which human workers carry out all monitoring and control routines on their farms at the expense of greater productivity. One such activity is manual weighing, which can be replaced by non-intrusive methods such as computer vision applications that estimate live poultry weight using video cameras. Since IoT devices may have low computing power, limiting their ability to process data locally, they can transfer it to a fog or cloud data center, where it is processed. This article conducts a dependability study of a poultry house automated with a computer vision-based system for estimating poultry weight, considering hierarchical models (e.g., Markov chains, reliability block diagrams, and closed-form equations) to represent the whole system and obtain steady-state availability and annual downtime. In addition, our purpose is to consider and compare different architectural solutions, such as edge and fog computing-based solutions. The evaluation showed that a cloud-based application with no redundancy has a downtime of 34.14 hours, which drops to 9.176 hours when a hot-standby redundancy strategy is adopted in the office node of the cloud solution.
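For reference, steady-state availability and downtime are related by A = MTTF / (MTTF + MTTR) and annual downtime = (1 − A) × 8760 h. A minimal sketch of this conversion, using made-up MTTF/MTTR figures rather than the paper's actual model parameters:

```python
HOURS_PER_YEAR = 8760.0

def steady_state_availability(mttf_h, mttr_h):
    """Steady-state availability A = MTTF / (MTTF + MTTR)."""
    return mttf_h / (mttf_h + mttr_h)

def annual_downtime_h(availability):
    """Expected downtime per year, in hours."""
    return (1.0 - availability) * HOURS_PER_YEAR

# e.g., a node that fails every 1000 h on average and takes 4 h to repair
a = steady_state_availability(1000.0, 4.0)
downtime = annual_downtime_h(a)
```

With these illustrative numbers, a ≈ 0.99602 and the node would accumulate roughly 35 hours of downtime per year.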
A survey on reliability and availability modeling of edge, fog, and cloud computing
Abstract: In recent years, sending data to cloud servers was a prominent trend, making the cloud computing paradigm dominate the technology landscape. However, the Internet of Things (IoT) is becoming part of our daily environments, and it generates a large volume of data, which creates uncontrollable delays. For delay-sensitive and context-aware services, these delays may cause low reliability and availability. To overcome these challenges, computing paradigms are moving from centralized cloud environments to the edge of the network. Several new computing paradigms, such as Edge and Fog computing, have emerged to support delay-sensitive and context-aware services. By combining edge devices, fog servers, and cloud computing, companies can build a hierarchical IoT infrastructure, using an Edge–Fog–Cloud orchestrated architecture to improve the performance, reliability, and availability of IoT environments. This paper presents a comprehensive survey on the reliability and availability of Edge, Fog, and Cloud computing architectures. We first introduce and compare related works on these paradigms to define the differences between Edge and Fog environments, since there is still some confusion about these terms. We also describe their taxonomy and how they link to each other. Finally, we draw some potential research directions that may help foster research efforts in this area.
Dependability Issues on an Internet Service Provider and availability study of autonomous systems
Abstract: The Internet is arguably the most important means of communication, as there is no business strategy without it. The Internet Service Provider's (ISP's) challenge is to ensure high availability of services to meet customers' expectations, guaranteeing that services will be available and ready for whatever the user's interests may be. Every time a user tries to access a service or product and it is unavailable, the service is characterized as unavailable. In this article, we evaluate the ISP's core availability, identify availability issues in the router component, and study CTMC and RBD models by performing a model validation experiment, a steady-state availability analysis, and a sensitivity analysis. Hierarchical modeling strategies (availability models combining reliability block diagrams (RBDs) and continuous-time Markov chains (CTMCs)) were used to indicate the availability of the infrastructure. The critical component of the system was identified through sensitivity analysis. We performed a model validation technique to demonstrate that the models represent the behavior of the real system. The results showed that the system availability is 0.99941, and the sensitivity analysis indicated that if the system administrator optimized the ISP infrastructure by 50%, it would yield a yearly downtime reduction of 3.4 hours.
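The RBD part of such a hierarchical model can be illustrated with a toy example: blocks in series multiply their availabilities, while parallel (redundant) blocks fail only if all replicas fail. The component availabilities below are made up for illustration and are not the paper's measured values:

```python
from functools import reduce

def series(avails):
    """Series RBD: the system is up only if every block is up."""
    return reduce(lambda acc, a: acc * a, avails, 1.0)

def parallel(avails):
    """Parallel RBD: the system is down only if every replica is down."""
    unavail = reduce(lambda acc, a: acc * (1.0 - a), avails, 1.0)
    return 1.0 - unavail

# Toy ISP core: a router in series with a duplicated link (made-up numbers).
router = 0.9995
link = 0.999
system = series([router, parallel([link, link])])

annual_downtime_h = (1.0 - system) * 8760.0
```

Note how duplicating the link makes its contribution to unavailability negligible (0.001² instead of 0.001), so the router dominates the system's downtime, which is the kind of insight a sensitivity analysis formalizes.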
Performance and availability evaluation of the blockchain platform Hyperledger Fabric
Abstract: Through the blockchain-as-a-service paradigm, one can provide the infrastructure required to host blockchain-based applications with regard to performance and dependability-related attributes. Many works have evaluated issues and mitigated them to reach higher throughput or better downtime and availability indexes. However, to the best of our knowledge, studies covering both characteristics are yet to be performed. This paper presents a performance evaluation of a private infrastructure hosting a blockchain-based application. As we monitored the system, we noticed an increase in resource consumption that may be associated with software aging issues on the Hyperledger Fabric platform or its basic components. We also evaluated the impact of this resource increment on the probability of the system being operational. When consumption issues were considered, one of the transaction types increased RAM consumption by almost 80% in less than 3 h, reducing system availability to 98.17%. For scenarios without resource increment issues on the infrastructure, the availability reached 99.35%, with an annual downtime of 56.43 h.
The Mercury Environment: A Modeling Tool for Performance and Dependability Evaluation
Abstract: It is important to be able to judge the performance or dependability metrics of a system, and we often do so using abstract models, even when the system is in the conceptual phase. Evaluating a system by performing measurements can have a high temporal and/or financial cost, which may not be feasible. Mathematical models can provide estimates of system behavior, and we need tools supporting different types of formalisms in order to compute the desired metrics. The Mercury tool enables a range of models to be created and evaluated to support performance and dependability evaluations, such as reliability block diagrams (RBDs), dynamic RBDs (DRBDs), fault trees (FTs), stochastic Petri nets (SPNs), continuous- and discrete-time Markov chains (CTMCs and DTMCs), as well as energy flow models (EFMs). In this paper, we introduce recent enhancements to Mercury, namely new SPN simulators, support for prioritized timed transitions, sensitivity analysis evaluation, several improvements to the usability of the tool, and support for the DTMC and FT formalisms.
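As an illustration of the kind of model such tools evaluate, the simplest availability CTMC has two states (up/down) with failure rate λ and repair rate μ, and its steady-state availability is μ / (λ + μ). A minimal sketch of solving it numerically, with made-up rates:

```python
import numpy as np

# Two-state availability CTMC: state 0 = up, state 1 = down.
# Hypothetical rates, for illustration only.
lam = 1.0 / 1000.0   # failure rate (per hour)
mu = 1.0 / 4.0       # repair rate (per hour)

# Infinitesimal generator Q; each row sums to zero.
Q = np.array([[-lam, lam],
              [mu, -mu]])

# Solve pi @ Q = 0 subject to sum(pi) = 1 by appending the
# normalization equation and using least squares.
A = np.vstack([Q.T, np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

availability = pi[0]   # steady-state probability of being up
```

For this model the numerical solution matches the closed form μ / (λ + μ); larger CTMCs with no closed form are solved the same way.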
Software Aging in Container-based Virtualization: An Experimental Analysis on Docker Platform
Abstract: Lightweight virtualization, and specifically containers, has become widespread in the information technology industry to provide an efficient operational environment for the execution of scalable services on the Internet. Containers rely on a set of technologies different from the features that enable hardware virtualization (i.e., hypervisor-based virtual machines). However, both types of virtualized environments are employed to host applications that must be accessible anytime, anywhere. Thus, they are prone to software aging, which usually affects systems that run for long time intervals. Researchers have identified software aging effects in distinct types of cloud computing environments and hypervisors over recent years. However, only a few works have dealt with this phenomenon in container-based platforms. This paper presents an experimental analysis of the software aging effects observed on Docker platforms, while also evaluating the fitness of a time-series model to predict the progression of resource consumption caused by software aging. We employ a stress test workload tailored for the scenario of containers arranged in a cluster managed by Docker Swarm. The obtained results indicate increasing use of resident memory, virtual memory, and CPU as the system undergoes subsequent scale-up and scale-down operations. The quadratic trend model was the best-fitting approach for predicting resident and virtual memory usage, with less than 5% prediction error. The experimental approach presented here may help system administrators to detect evidence of software aging in container-based environments, allowing them to choose a proper method and time for deploying rejuvenation actions to mitigate the dependability issues raised in scenarios similar to those described here.
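A quadratic trend model of the kind found best-fitting can be sketched with NumPy's polynomial fitting. The series below is synthetic, standing in for the monitored memory data, which is not reproduced here:

```python
import numpy as np

# Synthetic "memory usage" series with a quadratic aging trend plus noise
# (a stand-in for monitored resident-memory data; not the paper's data).
t = np.arange(0, 50, dtype=float)            # time steps (e.g., minutes)
rng = np.random.default_rng(42)
mem = 100.0 + 0.8 * t + 0.05 * t**2 + rng.normal(0.0, 2.0, t.size)

# Fit a degree-2 (quadratic) trend model and extrapolate.
coeffs = np.polyfit(t, mem, deg=2)           # [a2, a1, a0]
trend = np.poly1d(coeffs)

forecast = trend(60.0)                       # predicted usage at a future step
residual = mem - trend(t)
mape = np.mean(np.abs(residual / mem)) * 100  # mean prediction error, in %
```

Extrapolating the fitted trend gives the administrator an estimate of when resource consumption will cross a threshold, i.e., a window for scheduling rejuvenation before failure.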
A Software Maintenance Methodology: An Approach Applied to Software Aging
Abstract: The increasing use of computational systems has highlighted concerns about attributes that may influence quality of service, such as performance, availability, reliability, and maintainability. Failures in the software development process may impact these attributes. Flawed code and overall software misdesign may cause internal errors, leading to system malfunction. Some errors might be identified and fixed during the software testing process. However, other errors may manifest only during the production stage. This is the case of the software aging phenomenon, which is related to the progressive degradation that software performance or reliability suffers during its operational life. This paper proposes a methodology for software maintenance that is tailored to identify, correct, and mitigate software aging effects. If the source code can be modified and a new version deployed with minimal impact, then data from aging detection is used for corrective maintenance, i.e., for fixing the bug that causes the aging effects. If the software can neither be fixed nor have its version updated without a long system interruption or other adverse consequences, then our approach can mitigate the aging effects through preventive maintenance to avoid service outages. The proposed methodology is validated through both Stochastic Petri Net (SPN) models and experiments in a controlled environment. The model evaluation considering a hybrid maintenance routine (preventive and corrective) yielded an availability of 99.82%, representing an annual downtime of 15.9 hours. By contrast, the baseline scenario containing only reactive maintenance (i.e., repairing only after failure) had more than 1342 hours of annual downtime, 80 times higher than the proposed approach.
Experimental Evaluation of Software Aging Effects in a Container-Based Virtualization Platform
Abstract: Cloud-based architectures have grown in recent years, and interest in container-based solutions has sharply increased among enterprises worldwide. Containers are a form of lightweight virtualization that can be used to provide cloud services. Adopting this kind of technology in a bare-metal context is gaining traction because it can offer many benefits, such as performance efficiency and cost reduction. Docker is a widespread platform for the creation and management of containers. As in any computational cloud service, Docker environments must deal with intensive workloads and may have a long-term life cycle, which might trigger problems that compromise system dependability. The software aging phenomenon is one of these likely problems. It is a process of cumulative errors or system misbehavior that leads to application failures and performance degradation throughout the system's runtime. This paper aims to monitor and evaluate software aging effects on the Docker platform in a cloud computing environment. We conducted two experimental studies with automated workloads to simulate the containers' life cycle and the intensive use of Docker features while the system was monitored. The results show high resource consumption by the operating system's network utility, in addition to memory fragmentation in the sub-processes of the Docker platform. Trends of increasing resident memory consumption were also observed in one of these scenarios.
How to download YouTube live streams?
Today I needed to download live stream videos from YouTube (i.e., streams that were still live). To do this, I used the command-line utility Streamlink, which extracts streams from various services and pipes them into a video player of choice or into a file.
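A minimal example of the kind of invocation this boils down to; the URL and output filename are placeholders, not the actual stream I downloaded:

```shell
# Record a YouTube live stream to a file at the best available quality.
streamlink "https://www.youtube.com/watch?v=VIDEO_ID" best -o stream.ts

# Optionally, try to start from the beginning of the live broadcast
# instead of the live edge (works for HLS streams):
streamlink --hls-live-restart "https://www.youtube.com/watch?v=VIDEO_ID" best -o stream.ts
```

Replacing `best` with a specific quality (run `streamlink URL` alone to list the available ones) also works, and `-o` can be swapped for piping into a player.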
Impactos da IoT na Avicultura: um Mapeamento Sistemático (Impacts of IoT on Poultry Farming: A Systematic Mapping)
Abstract: Brazil is today the largest exporter and the second-largest producer of chicken meat worldwide. Despite such a prominent position, the lack of data about the use of Internet of Things (IoT)-based tools in national poultry farming leads to the hypothesis that most Brazilian poultry farmers still use conventional methods to the detriment of higher productivity. In this work, we mapped 17 publications from the international literature that present IoT-based solutions spanning food safety, environmental production factors, traceability, and animal health. This way, we expect to contribute to a discussion between the computer science and agribusiness scientific communities through a systematic review.