The paper presents a method for the quantitative assessment of zigzag vehicle trajectories, which makes it possible to identify potentially dangerous driver behavior. The algorithm analyzes changes in direction between trajectory segments and includes data preprocessing steps: merging of closely spaced points and trajectory simplification using a modified Ramer-Douglas-Peucker algorithm. Experiments on a balanced data set (20 trajectories) confirmed the effectiveness of the method: precision of 0.8, recall of 1.0, and an F1-score of 0.833. The developed approach can be applied in traffic monitoring, accident prevention, and hazardous driving detection systems. Further research is aimed at improving accuracy and adapting the method to real-world conditions.
Keywords: trajectory, trajectory analysis, zigzag, trajectory simplification, Ramer-Douglas-Peucker algorithm, YOLO, object detection
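As an illustration of the simplification step mentioned in the abstract, here is a minimal sketch of the standard Ramer-Douglas-Peucker algorithm in Python (the paper's modified variant is not reproduced; the tolerance `epsilon` and the sample points are arbitrary):

```python
import math

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker: recursively keep points that deviate more
    than epsilon from the chord between the first and last point."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = math.hypot(dx, dy) or 1.0
    # Find the interior point farthest (perpendicular) from the chord.
    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        px, py = points[i]
        d = abs(dy * (px - x1) - dx * (py - y1)) / norm
        if d > dmax:
            dmax, idx = d, i
    if dmax <= epsilon:
        return [points[0], points[-1]]
    # Split at the farthest point and simplify both halves.
    left = rdp(points[:idx + 1], epsilon)
    right = rdp(points[idx:], epsilon)
    return left[:-1] + right
```

After simplification, the zigzag measure described in the paper can be computed from the direction changes between consecutive segments of the reduced polyline.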
The paper addresses the technical diagnostics of hump yard control devices such as wagon retarders. Current analytical methods for monitoring and diagnosing wagon retarder conditions are reviewed. The factors used in existing diagnostic systems are analyzed, and new factors to be taken into account are suggested, including specific pathway peculiarities, wagon group lengths, braking curve styles, initial wagon group speed, and environmental conditions. The suggested set of factors is characterized from the standpoint of regression analysis, and the replacement of some continuous factors with lexical ones is proposed. Decision tree-based classifiers, which can be built by Data Mining methods on a training set, are suggested for classifying hump retarder conditions. An improved method of building decision trees is proposed; its advantage over existing algorithms is demonstrated on evaluation sets.
Keywords: hump yard, wagon retarders, regression, decision trees, classification, data mining, multi-factor analysis, soft computations
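Decision tree induction of the kind described above is driven by a split criterion; a common choice is information gain over a categorical (lexical) factor. A minimal sketch, with hypothetical factor and class names (the paper's improved induction method is not reproduced):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(samples, labels, feature):
    """Reduction in label entropy after splitting samples by a
    categorical feature (the ID3-style split criterion)."""
    n = len(samples)
    by_value = {}
    for s, y in zip(samples, labels):
        by_value.setdefault(s[feature], []).append(y)
    remainder = sum(len(ys) / n * entropy(ys) for ys in by_value.values())
    return entropy(labels) - remainder
```

A greedy tree builder picks, at each node, the factor with the highest gain and recurses on the resulting subsets.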
In this paper, a new model of an open multichannel queuing system with mutual assistance between channels and a limited waiting time for requests in the queue is proposed. General mathematical dependencies for the probabilistic characteristics of such a system are presented.
Keywords: queuing system, queue, service device, mutual assistance between channels
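The paper derives closed-form dependencies; as a rough companion, here is a Monte-Carlo sketch of a related M/M/c queue in which a request is lost if its wait would exceed a limit. Mutual assistance between channels is not modeled, and all rates below are arbitrary:

```python
import random

def simulate_mmc_impatient(lam, mu, c, max_wait, n_arrivals, seed=1):
    """Monte-Carlo estimate of the loss probability in an M/M/c queue
    where a request leaves unserved if its wait exceeds max_wait."""
    rng = random.Random(seed)
    free_at = [0.0] * c                   # time each channel becomes free
    t, lost = 0.0, 0
    for _ in range(n_arrivals):
        t += rng.expovariate(lam)         # next Poisson arrival
        start = max(t, min(free_at))      # earliest possible service start
        if start - t > max_wait:
            lost += 1                     # request reneges
            continue
        k = free_at.index(min(free_at))
        free_at[k] = start + rng.expovariate(mu)
    return lost / n_arrivals
```

Such a simulation is useful mainly as a cross-check of analytic formulas on particular parameter sets.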
Currently, key aspects of software development include the security and efficiency of the applications being created. Special attention is given to data security and operations involving databases. This article discusses methods and techniques for developing secure applications through the integration of the Rust programming language and the PostgreSQL database management system (DBMS). Rust is a general-purpose programming language that prioritizes safety as its primary objective. The article examines key concepts of Rust, such as strict typing, the RAII (Resource Acquisition Is Initialization) idiom, macro definitions, and immutability, and how these features contribute to the development of reliable, high-performance applications that interface with databases. The integration with PostgreSQL, shown to be both straightforward and robust, is analyzed, highlighting its capacity for efficient data management while maintaining a high level of security and thereby mitigating common errors and vulnerabilities. Rust is currently used less widely than popular languages such as JavaScript, Python, and Java, partly because of its steep learning curve; however, major companies see its potential. Rust modules are being integrated into operating system kernels (Linux, Windows, Android), Mozilla is developing features for Firefox's Gecko engine, and Stack Overflow surveys show rising usage of Rust. A practical example involving the dispatch of information on class schedules and video content illustrates the advantages of using Rust in conjunction with PostgreSQL to create a scheduling management system that ensures data integrity and security.
Keywords: Rust programming language, memory safety, RAII, metaprogramming, DBMS, PostgreSQL
The article considers a variant of constructing a model of a solar battery orientation drive based on a DC motor and PID control. Orientation in space is performed along two axes: azimuth and zenith. The model is used for the optimal adjustment of the PID controller parameters when tracking the required orientation angles under gusty wind conditions. The main adjustment criteria are: small overshoot when tracking the angle, an aperiodic (non-oscillatory) character of the transient processes, minimum dynamic error in compensating for wind effects, and minimum settling time. The controller was optimized using the coordinate descent method. A variant of controller adjustment for the optimal mode is given, with process graphs confirming its practical optimality. The constructed drive model can be used to implement a digital twin of the monitoring and control system for the solar panel orientation drive.
Keywords: mathematical model of the drive, PID controller, solar panel, gusty wind effects, azimuth and zenith orientation, optimization by complex criterion
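The structure of such a tuning loop can be sketched in a few lines. The plant below is a generic double integrator with a constant disturbance standing in for wind load; it is not the paper's DC motor model, and the gains are arbitrary starting values of the kind a coordinate-descent tuner would adjust:

```python
def simulate_pid(kp, ki, kd, setpoint=1.0, steps=3000, dt=0.01,
                 disturbance=0.2):
    """Euler simulation of PID position control of a double-integrator
    plant: angle'' = u - disturbance. Returns the final angle."""
    angle = vel = 0.0
    integral, prev_err = 0.0, setpoint
    for _ in range(steps):
        err = setpoint - angle
        integral += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        u = kp * err + ki * integral + kd * deriv   # PID law
        vel += (u - disturbance) * dt               # plant dynamics
        angle += vel * dt
    return angle
```

A tuner evaluates a run like this against the criteria from the abstract (overshoot, oscillation, dynamic error, settling time) and adjusts `kp`, `ki`, `kd` one coordinate at a time; note the integral term is what removes the steady-state error caused by the constant disturbance.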
This article discusses the implementation features of a model for recognizing apple tree diseases from leaves. In the course of the work, a number of experiments were conducted with well-known convolutional network architectures (ResNet50, VGG16, and MobileNet); accuracy was found to decrease from ResNet50 to MobileNet. The effect of changing network parameters on accuracy is considered: the number of layers and batch normalization. Practical experience in building our own model is described; it was likewise studied under changing parameters, such as the number of hidden layers, and showed the best results with 3 and 4 layers. The dependence of model learning on the number of layers and the range of epochs is studied. In early epochs, networks with few layers show a rapid increase in accuracy, while networks with many layers train more slowly, which is probably due to the vanishing gradient effect. In particular, it is shown that batch normalization or network deepening can counteract this effect. The use of batch normalization gave a pronounced improvement in the range from 35 to 50 epochs. For clarity, the work includes graphs of neural network training. In the course of the work, conclusions and recommendations were formed on how to build a network more effectively.
Keywords: artificial intelligence, computer vision, neural networks, deep learning, machine learning
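The batch normalization discussed above has a simple forward pass: each feature is normalized over the batch to zero mean and unit variance, then scaled and shifted by learnable parameters. A minimal NumPy sketch (training-mode statistics only, no running averages):

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Batch normalization forward pass for a (batch, features) array:
    normalize each feature over the batch, then scale and shift."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)   # eps guards against var == 0
    return gamma * x_hat + beta
```

Keeping pre-activation statistics normalized in this way is one reason the technique helps gradients propagate through deeper stacks of layers.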
This article provides an overview of existing structural solutions for in-line robots designed for inspection work. The main attention is paid to the analysis of the various motion mechanisms and chassis types used in such robots, and to identifying their advantages and disadvantages with respect to the task of scanning a longitudinal weld. Tracked, wheeled, and screw-type robots, as well as robots propelled by the pressure inside the pipe, are considered. Special attention is paid to ensuring stable and accurate movement of the robot along the weld, minimizing lateral displacements, and choosing the optimal positioning system. Based on the analysis, recommendations are offered for choosing the most appropriate type of motion and chassis for constructing a 3D model of a weld using a laser triangulation sensor (hereinafter referred to as LTD).
Keywords: in-line work, inspection work, 3D scanning, welds, structural solutions, types of movement, chassis, crawler robots, wheeled robots, screw robots, longitudinal welds, laser triangulation sensor
The railway transport industry demonstrates significant achievements in various fields of activity through the introduction of predictive analytics. Predictive analytics systems use data from a variety of sources, such as sensor networks, historical data, weather conditions, etc. The article discusses the key areas of application of predictive analytics in railway transport, as well as the advantages, challenges and prospects for further development of this technology in the railway infrastructure.
Keywords: predictive analytics in railway transport, passenger traffic forecasting, freight optimization, maintenance optimization, inventory and supply management, personnel management, financial planning, big data analysis
A Simulink model is considered that makes it possible to calculate the transient processes of objects described by a step response for any type of input action. An algorithm is described for the S-function that performs calculations using the Duhamel integral. It is shown that, owing to its features, the S-function can store values from the previous step of the Simulink model calculation. This allows the input signal to be decomposed into step components, storing the time of occurrence and the value of each step. For each increment of the input signal, the S-function calculates the response by scaling the step response; at each calculation step, the sum of these reactions is found. The S-function provides a procedure for freeing memory once the end point of the step response is reached, so the amount of memory required for the calculation does not grow above a certain limit and, in general, does not depend on the length of the model time. The S-function uses matrix operations rather than loops, so the model computes quite quickly. The article presents the results of calculations, gives recommendations for setting the model parameters, and formulates a conclusion on the possibility of using the model for calculating dynamic modes.
Keywords: simulation modeling, Simulink, step response, step function, S-function, Duhamel integral.
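The superposition scheme described above (decompose the input into a staircase of steps, scale and time-shift the step response for each increment, sum the reactions) can be sketched outside Simulink in plain Python. The first-order plant with step response h(t) = 1 - exp(-t) is an arbitrary example:

```python
import numpy as np

def duhamel_response(t, u, step_response):
    """Response of an LTI system to input u(t) by superposing scaled,
    shifted copies of its step response (discrete Duhamel sum)."""
    y = np.zeros_like(t)
    prev = 0.0
    for k, tk in enumerate(t):
        du = u[k] - prev              # increment of the staircase input
        prev = u[k]
        if du != 0.0:
            tail = t >= tk            # the increment acts from tk onward
            y[tail] += du * step_response(t[tail] - tk)
    return y

# Example: step up to 2.0 at t=0, step down to 0.5 at t=3.
t = np.linspace(0.0, 10.0, 1001)
u = np.where(t < 3.0, 2.0, 0.5)
y = duhamel_response(t, u, lambda tau: 1.0 - np.exp(-tau))
```

The S-function's memory-freeing trick corresponds to dropping a stored step once `t - tk` passes the end of the tabulated step response.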
This paper presents a simulation of square bar extrusion using the finite element method in QForm software. The sequence of simulation stages is described and verified using an example of extruding AD31 alloy. The geometry of the extrusion tool (die) was improved to reduce material damage and eliminate profile distortion. It was found that rounding the corners and adding a calibrating section to the die provides improved performance. The analysis of metal heating from plastic deformation is also provided. The study highlights the importance of computer modeling for optimizing tools and increasing the efficiency of the metal forming process. Specifically, the initial and final die geometries were compared to assess the impact on product properties. Recommendations are provided for using simulation in the practice of developing metal forming process technologies. The benefits of using QForm software over alternative solutions are highlighted, including access to a comprehensive material database and robust technical support. The improved die design resulted in reduced plastic deformation and improved product quality. Finite element analysis significantly accelerates the development and testing of tools, providing the ability to develop optimal designs that account for various factors.
Keywords: extrusion, die design, finite element method, computer modeling, optimization, metal forming, aluminum alloy, process simulation
This article examines the use of a dual-circuit closed-loop system for pulsation combustion of solid fuel in agricultural services. The results of research on gas self-oscillations during wood waste combustion in a Helmholtz resonator-type installation are presented. It has been established that with certain geometric characteristics of the installation, a significant sound pressure level (up to 165 dB) can be achieved in the combustion chamber while maintaining a low noise level (up to 65 dB), which contributes to increased combustion efficiency and meets environmental requirements. Potential applications of this technology are proposed, including agricultural waste utilization, drying of agricultural products, and heating of greenhouse complexes and livestock facilities.
Keywords: Pulsation combustion, Helmholtz resonator, solid fuel, agricultural waste, energy efficiency, biomass utilization, agro-industrial complex.
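The installation is of the Helmholtz resonator type, whose natural frequency follows from the classical lumped-parameter formula. A minimal sketch (the neck end correction is neglected, and the dimensions in the example are hypothetical, not taken from the paper):

```python
import math

def helmholtz_frequency(c, neck_area, neck_length, cavity_volume):
    """Resonant frequency of a Helmholtz resonator:
    f = (c / (2*pi)) * sqrt(A / (V * L)),
    with sound speed c, neck area A, neck length L, cavity volume V."""
    return c / (2.0 * math.pi) * math.sqrt(
        neck_area / (cavity_volume * neck_length))

# Hypothetical dimensions: 10 cm^2 neck, 5 cm long, 10 L cavity.
f = helmholtz_frequency(343.0, 1.0e-3, 0.05, 0.01)
```

Matching this frequency to the combustion heat-release dynamics is what sustains the self-oscillations exploited in pulsation combustion.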
A study of heat transfer in porous heat exchangers with a sediment of dust particles was carried out. The influence of heat exchanger length and the presence of sediment on the Nusselt number was studied. It was revealed that increasing the length of the heat exchanger from 5 to 30 mm leads to an increase in the Nusselt number by 39.72-81.35% depending on the Reynolds number. The formation of sediment on the surface of the heat exchanger leads to a decrease in the Nusselt number by 2.8-6.6%.
Keywords: heat transfer, hydrodynamics, calculation, Nusselt number, Reynolds number, mathematical modeling, sediment formation, microelectronics, cooling systems, radiator
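For orientation on how the Nusselt number scales with the Reynolds number, a classical correlation can be sketched. Note this is the Dittus-Boelter correlation for turbulent flow in a smooth channel, given purely as an illustration; the paper's porous heat exchangers with sediment require dedicated models such as those studied there:

```python
def dittus_boelter(re, pr, heating=True):
    """Dittus-Boelter correlation for turbulent channel flow:
    Nu = 0.023 * Re^0.8 * Pr^n, n = 0.4 for heating, 0.3 for cooling.
    Illustrative only; not applicable to porous media as-is."""
    n = 0.4 if heating else 0.3
    return 0.023 * re ** 0.8 * pr ** n
```

The strong Re^0.8 dependence is consistent with the abstract's observation that the Nusselt-number change with length depends on the Reynolds number.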
The article discusses the basic model of the formation of an emulsion layer on a rotating cylinder. Special attention is paid to the influence of various parameters on the internal characteristics of the emulsion formation process. A mathematical description of the displacement layer is given, and functional dependencies between the parameters characterizing the emulsion formation process are derived. Based on experimental data, the qualitative influence of "internal" and "external" factors on the formation of the emulsion layer has been studied. The results of the study are presented as graphs, from which the following conclusions can be drawn. An increase in the viscosity of the emulsion leads to a decrease in parameters such as the boundary of the fracture region of the more viscous liquid layer adjacent to the surface of the rotating cylinder and the viscosity of the emulsion in the transition layer, while the consumption of the emulsion increases. It is also established that growth of the complex of "external" parameters leads to a decrease in all internal parameters of emulsion formation. The dependencies obtained will help in calculating the operating and design parameters of the corresponding devices.
Keywords: emulsion layer, viscosity, density, emulsion, rotating cylinder, liquid, emulsion composition
The paper is devoted to the application of a reinforcement learning model for automating the planning of logging site placement in forestry. A method for optimizing the selection of cutting areas based on the Proximal Policy Optimization (PPO) algorithm is proposed. An information system adapted for processing forest management data in matrix form and for working with geographic information systems has been developed. The experiments conducted demonstrate the ability of the proposed method to find rational options for the placement of cutting areas. The results obtained are promising for the use of intelligent systems in the forestry industry.
Keywords: reinforcement learning, deep learning, cutting areas location, forestry, artificial intelligence, planning optimization, clear-cutting
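The core of PPO is its clipped surrogate objective: policy updates are limited by clipping the probability ratio between the new and old policies. A minimal NumPy sketch of that objective (the full training loop, environment, and network are omitted):

```python
import numpy as np

def ppo_clip_objective(new_logp, old_logp, advantages, eps=0.2):
    """PPO clipped surrogate objective: the mean of
    min(r * A, clip(r, 1-eps, 1+eps) * A), with r = pi_new / pi_old
    computed from log-probabilities of the taken actions."""
    ratio = np.exp(new_logp - old_logp)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantages
    return np.minimum(unclipped, clipped).mean()
```

Maximizing this quantity rewards actions with positive advantage while the clip prevents any single update from moving the policy too far, which is what makes PPO stable on planning tasks like cutting-area placement.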
The article proposes an image recognition algorithm for an automated fiberglass defect inspection system using machine learning methods. Various types of neural network architectures are considered, such as firing-rate neuron models, the Hopfield network, the restricted Boltzmann machine, and convolutional neural networks. A convolutional neural network with the ResNet architecture was chosen for developing the algorithm. Testing of the program showed that the trained neural network operates correctly and achieves high accuracy.
Keywords: fiberglass, defects, machine learning, convolutional neural networks, ResNet architecture, testing, accuracy
The article substantiates the hypothesis that changing the destructive ability of genetic algorithm (GA) operators directly during the run of the evolutionary procedure can influence the trajectory of the population through the solution space in labor-intensive tasks. To this end, it is proposed to use a control superstructure based on an artificial neural network (ANN) or the "random forest" algorithm. The study presents results obtained with calculations on CPU and CPU + GPGPU in the resource-intensive task of synthesizing dynamic simulation models of business processes using the mathematical apparatus of Petri net (PN) theory, and compares GA without a control superstructure, GA with a control superstructure based on an ANN of the RNN class, and GA with the "random forest" algorithm. To model the operation of the GA, the ANN, the "random forest" algorithm, and the business process models, a graph representation using various PN extensions is proposed, and examples of modeling the selected methods with this apparatus are given. For the ANN and the "random forest" algorithm to recognize the state of the GA population, a number of rules are proposed that allow control of the solution synthesis process. Based on the computational experiments and their analysis, the strengths and weaknesses of the proposed machine learning algorithms as a control superstructure are shown; the experiments confirmed the proposed hypothesis.
Keywords: Petri net, decision tree, random forest, machine learning, Petri net theory, bipartite directed graph, intelligent systems, evolutionary algorithms, decision support systems, mathematical modeling, graph theory, simulation modeling
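The idea of varying an operator's destructive ability during the run can be illustrated with a toy GA in which a simple rule, standing in for the learned control superstructure, raises the mutation rate when the best fitness stagnates and lowers it otherwise. Everything here (OneMax fitness, population sizes, the 1.5x adaptation factor) is an illustrative assumption, not the paper's setup:

```python
import random

def run_ga(fitness, length=20, pop_size=30, gens=60, seed=0):
    """Minimal elitist GA maximizing `fitness` over bit strings.
    The mutation rate (the operator's 'destructive ability') is adapted
    online: increased on stagnation, decreased on progress."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    mut = 1.0 / length
    prev_best = None
    for _ in range(gens):
        scored = sorted(pop, key=fitness, reverse=True)
        elite = scored[: pop_size // 2]          # keep the best half
        pop = list(elite)
        while len(pop) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, length)
            # One-point crossover followed by bit-flip mutation.
            child = [g ^ (rng.random() < mut) for g in a[:cut] + b[cut:]]
            pop.append(child)
        best = fitness(scored[0])
        # Control rule: more destruction when stagnating, less otherwise.
        mut = min(0.2, mut * 1.5) if best == prev_best else max(1.0 / length, mut / 1.5)
        prev_best = best
    return max(map(fitness, pop))

best = run_ga(sum)   # OneMax: fitness = number of ones
```

In the paper, an ANN or random forest replaces the stagnation rule, recognizing the population state and steering the operators accordingly.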
When measuring luminous intensity, the size of the light source should be small compared to the photometric distance. In this case the inverse-square law holds, and in practice it can be applied with high measurement accuracy provided the photometric distance exceeds the largest dimension of the light source by at least a factor of 10. For light sources of finite dimensions at small distances to the illuminated surface, this law must be corrected. This paper presents calculated errors arising from the use of the inverse-square law for finite-size light sources of various shapes and various light distributions.
Keywords: inverse-square law, luminous intensity, measurement error, photometric distance.
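For one simple geometry the error of the point-source approximation has a closed form: for a uniform Lambertian disk of radius R, the exact on-axis illuminance is E = pi*L*R^2/(R^2 + d^2), while the point approximation gives E0 = I/d^2 with I = pi*L*R^2, so the relative error is exactly (R/d)^2. A sketch of this textbook case (the paper treats more shapes and light distributions):

```python
def inverse_square_error(radius, distance):
    """Relative error of the point-source (inverse-square) approximation
    for the on-axis illuminance of a uniform Lambertian disk.
    Analytically this equals (radius / distance)**2."""
    exact = radius ** 2 / (radius ** 2 + distance ** 2)   # proportional to E
    approx = radius ** 2 / distance ** 2                  # proportional to E0
    return approx / exact - 1.0
```

At a distance of 10 source diameters (d = 20R) the error is 1/400, i.e. 0.25%, which is consistent with the usual 10x rule of thumb.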
The article discusses the process of cavitation bubble formation. One effective design of regulating equipment is the axial-type valve, for which a mathematical description of cavitation bubble formation has been developed. This description makes it possible to evaluate the bubble structure depending on the main operating parameters and design dimensions of the valve.
Keywords: valve, cavitation bubbles, probability, functions
A transient non-linear coupled heat transfer problem, with heat conductivity dependent on moisture content, has been studied for a heterogeneous domain represented by a building exterior structure. Moisture content is treated as a variable parameter linked to material permeability and vapour transfer characteristics, as well as to the amount of moisture condensed and evaporated. The solution has been obtained by a semi-analytical approach in which the governing equations are discretized by the finite element technique in the spatial domain and analytically in the temporal domain. The Picard iteration method is used for equation linearization. A sample problem based on a connection detail of a three-layered exterior wall with a floor slab has been solved, and the results are compared with those of the corresponding linear problem.
Keywords: non-linear transient problem, semi-analytical method, heat transfer, evaporation, condensation
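The Picard linearization used above is a fixed-point strategy: freeze the solution-dependent coefficients, solve the resulting linear problem, and repeat until successive iterates agree. A scalar sketch of the same idea, on a toy conduction balance k(T)*(T - 20) = 100 with a hypothetical temperature-dependent conductivity k(T) = 1 + 0.01*T (not the paper's moisture-dependent model):

```python
def picard(g, x0, tol=1e-10, max_iter=200):
    """Picard (fixed-point) iteration x_{k+1} = g(x_k), stopping when
    successive iterates differ by less than tol."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Rearranged fixed-point form of k(T)*(T - 20) = 100.
g = lambda T: 20.0 + 100.0 / (1.0 + 0.01 * T)
T = picard(g, 50.0)
```

In the finite element setting, `g` stands for assembling and solving the linear system with coefficients evaluated at the previous iterate.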
The paper proposes an approach to improving the efficiency of machine learning models used in monitoring tasks by means of metric spaces. A method is proposed for assessing the quality of monitoring systems based on interval estimates of the response zones to a possible incident. This approach extends the classical metrics for evaluating machine learning models to take into account the specific requirements of monitoring tasks. The calculation of interval boundaries is based on probabilities produced by a classifier trained on historical data to detect dangerous states of the system. By combining the probability of an incident with the normalized distance to incidents in the training sample, it is possible to simultaneously improve all the considered quality metrics for monitoring: precision, recall, and timeliness. One way to improve the results is to use the scalar product of the normalized components of the metric space and their importances as features in a machine learning model. The permutation feature importance method, which does not depend on the chosen machine learning algorithm, is used for this purpose. Numerical experiments have shown that using distances in a metric space to incident points from the training sample can improve the early detection of dangerous situations by up to a factor of two. The proposed approach is versatile and can be applied with various classification algorithms and distance calculation methods.
Keywords: monitoring, machine learning, state classification, incident prediction, lead time, anomaly detection
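Permutation feature importance, mentioned above as the model-agnostic ingredient, measures how much a score drops when one feature column is shuffled, breaking its link with the target. A minimal NumPy sketch with a toy "model" (the linear predictor and R^2 score below are illustrative assumptions):

```python
import numpy as np

def permutation_importance(predict, X, y, metric, n_repeats=10, seed=0):
    """Model-agnostic permutation feature importance: the average drop in
    metric(y, predict(X)) when a single feature column is shuffled."""
    rng = np.random.default_rng(seed)
    base = metric(y, predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])      # break the feature-target link
            drops.append(base - metric(y, predict(Xp)))
        importances[j] = np.mean(drops)
    return importances

# Toy check: the target depends only on the first of two features.
X = np.random.default_rng(1).normal(size=(300, 2))
y = 3.0 * X[:, 0]
predict = lambda X: 3.0 * X[:, 0]                    # a "perfect" model
r2 = lambda y, p: 1.0 - ((y - p) ** 2).sum() / ((y - y.mean()) ** 2).sum()
imp = permutation_importance(predict, X, y, r2)
```

Because only predictions are queried, the same routine works unchanged for any classifier or distance-based score, which is the property the paper relies on.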
The article describes the mathematical foundations of time-frequency analysis of signals using the Empirical Mode Decomposition (EMD), Intrinsic Time-Scale Decomposition (ITD), and Variational Mode Decomposition (VMD) algorithms. Synthetic and real signals distorted by additive white Gaussian noise with different signal-to-noise ratios are considered. A comprehensive comparison of the EMD, ITD, and VMD algorithms is performed, and the applicability of these algorithms to signal denoising and spectral analysis is investigated. The execution time and computational stability of the algorithms are also estimated.
Keywords: time-frequency analysis, denoising, decomposition, mode, Hilbert-Huang transformation, Empirical Mode Decomposition, Intrinsic Time-Scale Decomposition, Variational Mode Decomposition
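The test-signal preparation described above, distorting a signal with additive white Gaussian noise at a prescribed SNR, can be sketched directly; the 50 Hz sine below is an arbitrary synthetic example, not one of the paper's signals:

```python
import numpy as np

def add_awgn(signal, snr_db, seed=0):
    """Add white Gaussian noise scaled so that the signal-to-noise
    ratio equals snr_db (power ratio in decibels)."""
    rng = np.random.default_rng(seed)
    p_signal = np.mean(signal ** 2)
    p_noise = p_signal / 10 ** (snr_db / 10)
    return signal + rng.normal(scale=np.sqrt(p_noise), size=signal.shape)

t = np.linspace(0.0, 1.0, 4000, endpoint=False)
clean = np.sin(2 * np.pi * 50 * t)
noisy = add_awgn(clean, snr_db=10)
# Measured SNR from the known clean signal, for verification.
measured = 10 * np.log10(np.mean(clean ** 2) / np.mean((noisy - clean) ** 2))
```

Decomposition quality is then judged by how well EMD, ITD, or VMD recovers `clean` from `noisy` as the target SNR is varied.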
The article discusses the wear of feeding machine rollers caused by speed mismatch in the material tracking mode. Existing methods of dealing with wear address the effect of the problem rather than its cause. One way to reduce the wear intensity of roller barrels is to develop a method of controlling the speed of the feeding machine that reduces the mismatch between the speeds of the rollers and the rolled product without violating the known technological requirements for creating pulling and braking forces. An algorithm is disclosed for calculating the speed correction based on metal tension, which compensates for roller wear and reduces friction forces. Modeling of the system with the developed algorithm showed the elimination of speed mismatch during material tracking, which will therefore reduce the intensity of roller wear.
Keywords: speed correction system, feeding machine, roller wear, metal tension, control system, speed mismatch, friction force reduction
PHP Data Objects (PDO) represents a significant advancement in PHP application development, providing a universal approach to interacting with database management systems (DBMSs). The article opens with an introduction describing the need for PDO, available as of PHP 5.1, which allows PHP developers to interact with different databases through a single interface, minimising the effort involved in portability and code maintenance. It discusses how PDO improves security by supporting prepared statements, a defence against SQL injection. The main part of the paper analyses the key advantages of PDO: its versatility in connecting to multiple databases (e.g. MySQL, PostgreSQL, SQLite), the use of prepared statements to enhance security, improved error handling through exceptions, transactional support for data integrity, and the ease of learning the PDO API even for beginners. Practical examples are provided, including preparing and executing SQL queries, setting attributes via the setAttribute method, and performing operations in transactions, emphasising the flexibility and robustness of PDO. In addition, the paper discusses best practices for using PDO in complex, high-volume projects, such as prepared statements for bulk data insertion, query optimisation, and stream processing for efficient handling of large amounts of data. The conclusion characterises PDO as the preferred tool for modern web applications, offering a combination of security, performance and code quality. The authors also suggest directions for future research regarding security test automation and the impact of different data models on application performance.
Keywords: PHP, PDO, databases, DBMS, security, prepared statements, transactions, programming
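The two PDO ideas emphasized above, parameter binding against SQL injection and atomic transactions, have direct analogues in other stacks. A sketch of both using Python's built-in sqlite3 module (the table and the malicious input are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

# Parameterized statement: user input is bound as data, never
# interpolated into SQL, so a hostile value stays inert.
malicious = "x'); DROP TABLE users; --"
with conn:  # the connection as context manager: commit or roll back atomically
    conn.executemany("INSERT INTO users (name) VALUES (?)",
                     [("alice",), (malicious,)])

rows = conn.execute("SELECT name FROM users ORDER BY id").fetchall()
```

The hostile string is simply stored as a name; the `users` table survives, which is exactly the guarantee PDO's prepared statements give PHP code.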
The paper describes programs developed by the author for the automatic control of calculations carried out in the Gaussian package and for processing the results obtained. Gaussian is a powerful quantum chemistry program that allows solving a wide variety of problems related to the study of chemical compounds. However, the program has peculiarities that in some problems make multiple restarts of a calculation necessary, and in some cases additional calculations may be required. For example, when studying a reaction rather than a separate molecule, the characteristics of the reaction must be computed from the results of calculations of the molecules involved in it. By automating the research process and performing calculations that are not directly available in Gaussian, shell programs can save considerable user time and relieve the user of routine work. The approaches described are especially relevant when studying a whole set of molecules (for example, reactions within a certain class of chemical compounds). Thus, provided there are no emergency situations, the use of a shell program eliminates the need for user intervention at all stages of the study between the creation of the initial tasks and the obtaining of the final results.
Keywords: mathematical modeling of chemical reactions, quantum chemical calculation, quantum chemical methods, software, Gaussian, shell program, automation of research
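A basic building block of such a shell program is deciding, from a job's output file, whether the calculation finished and what energy it produced. A sketch of that check; the excerpt below is synthetic, though "SCF Done:" and "Normal termination of Gaussian" are the standard markers in Gaussian output:

```python
import re

def parse_gaussian_log(text):
    """Extract the last SCF energy (a.u.) and the termination status from
    Gaussian output text -- the check a shell program performs before
    deciding whether a job must be restarted."""
    energies = re.findall(r"SCF Done:\s+E\([^)]+\)\s+=\s+(-?\d+\.\d+)", text)
    finished = "Normal termination of Gaussian" in text
    return (float(energies[-1]) if energies else None), finished

# Synthetic excerpt in the standard Gaussian log format.
sample = """\
 SCF Done:  E(RB3LYP) =  -76.4089533   A.U. after    8 cycles
 SCF Done:  E(RB3LYP) =  -76.4089541   A.U. after    3 cycles
 Normal termination of Gaussian 16
"""
energy, ok = parse_gaussian_log(sample)
```

On `ok == False` the shell resubmits the job from a checkpoint; once all molecules finish, reaction characteristics are assembled from the collected energies.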
The article presents the main stages of and recommendations for developing an information and analytical system (IAS) based on geographic information systems (GIS) for the rational management of forest resources, providing for the processing, storage and presentation of information on forest wood resources, and describes specific examples of the implementation of its individual components and digital technologies. The following stages of IAS development are considered: collecting and structuring data on forest wood resources; justifying the type of software implementation of the IAS; selecting equipment; developing the data analysis and processing unit; developing the architecture of interaction between IAS blocks; developing the IAS application interface; and testing the IAS. It is proposed to implement the interaction between the client and server parts with Asynchronous JavaScript and XML (AJAX) technology, to use the open-source Leaflet library for visualization of geodata, and to use the SQLite database management system to store large amounts of data on the server. The proposed approaches can find application in creating an IAS for forming management decisions in the field of rational management of forest wood resources.
Keywords: geographic information systems, forest resources, methodology, web application, AJAX technology, SQLite, Leaflet, information processing