3.2.1 Plan One: Same-Type Database Storage
Generally speaking, this plan can handle most data storage problems. Hot and cold storage share the same storage structure, so when data is moved from the hot storage to the cold storage no data conversion is required, and the code changes are relatively small. Just as with database partitioning, before implementing this plan we need to consider the following issues:

How to determine whether data is hot or cold;
How to trigger the separation of hot and cold data;
How to separate the hot and cold data;
How to use the hot and cold data.
Now, let’s elaborate on these four aspects.

3.2.1.1 How to Determine Whether Data Is Hot or Cold
A common method is to judge based on one or several fields in the main table. In the work order system, for example, the work order status and the time of the last customer-service operation can serve as the criteria: work orders that have been closed and inactive for more than one month can be treated as cold data, while all other work orders are hot data.
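As a concrete illustration of such a criterion, here is a minimal sketch in Python. The schema, column names, and the use of SQLite are all assumptions made for the demo; a real work order system would run an equivalent query against its own tables.

```python
import sqlite3
from datetime import datetime, timedelta

# Hypothetical work order table for illustration; real schemas will differ.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE work_order (
        id INTEGER PRIMARY KEY,
        status TEXT NOT NULL,           -- e.g. 'open', 'closed'
        last_operated_at TEXT NOT NULL  -- time of the last customer-service operation
    )
""")

now = datetime(2024, 6, 1)
rows = [
    (1, "closed", (now - timedelta(days=45)).isoformat()),  # closed, inactive > 1 month
    (2, "closed", (now - timedelta(days=3)).isoformat()),   # recently closed: still hot
    (3, "open",   (now - timedelta(days=90)).isoformat()),  # still open: hot regardless
]
conn.executemany("INSERT INTO work_order VALUES (?, ?, ?)", rows)

# Cold data: closed AND last operated more than one month ago.
cutoff = (now - timedelta(days=30)).isoformat()
cold_ids = [r[0] for r in conn.execute(
    "SELECT id FROM work_order WHERE status = 'closed' AND last_operated_at < ?",
    (cutoff,),
)]
print(cold_ids)
```

Only work order 1 satisfies both conditions, so only it is selected as cold data.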
When determining hot and cold data, we should follow these principles:

Once data is moved to the cold storage, the business code can only perform query operations on it.
Hot and cold data must never be read together in a single query.
3.2.1.2 How to Trigger the Separation of Hot and Cold Data
There are three ways to trigger hot-cold separation: add triggering code after the data-modification code, monitor the database change log, or scan the database on a schedule. We will explain each of these methods in turn.

Add the code that triggers hot-cold separation after the data-modification code.
After each data modification completes, the hot-cold separation code is triggered. This method is relatively simple: each time, it only needs to judge whether the data has become cold. It guarantees real-time behavior, but it cannot distinguish hot and cold data by date or time, and every piece of code that modifies data must include the hot-cold separation logic. For these reasons this method is rarely used, and generally only in small systems.
Monitor the database change log
This approach requires creating a new service to monitor the database change logs. Once it detects that the relevant tables have been modified, it triggers the hot-cold separation logic. There are two sub-methods: one is to trigger the separation logic directly; the other is to publish the changed table data to a queue (a shared list or a message queue), from which a subscriber retrieves the data and performs the separation. The advantage of this approach is complete decoupling from the business code and low latency. However, like the first method, it cannot distinguish hot and cold data by date. Moreover, it may cause a concurrency problem in which the business code and the hot-cold separation logic operate on the same piece of data at the same time.
Scan the database on a schedule
This approach involves creating a new service that scans the database on a schedule. Usually this is done through a task-scheduling platform or a third-party open-source library or component; if you prefer, you can also implement it with an operating-system timer task. The advantage of this method is that it is decoupled from the business code and can distinguish hot and cold data by date and time; the drawback is that it is not real-time.
Based on the three methods described above, the scheduled database scan is the best fit for the work order system's hot-cold separation.
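The scheduled scan can be sketched as a background loop. This is a bare-bones illustration using a Python thread and event; the function name, the interval, and the callback are all assumptions, and a real project would more likely use a task-scheduling platform as noted above.

```python
import threading
import time

# Minimal sketch of a scheduled-scan service (names assumed for illustration).
def scan_and_separate(run_separation, interval_seconds, stop_event):
    """Run the hot-cold separation logic every `interval_seconds` until stopped."""
    while not stop_event.wait(interval_seconds):
        run_separation()  # scan the database and migrate any cold data found

calls = []
stop = threading.Event()
worker = threading.Thread(
    target=scan_and_separate,
    args=(lambda: calls.append("scanned"), 0.01, stop),  # 10 ms interval for the demo
)
worker.start()

time.sleep(0.05)  # let a few scan cycles run
stop.set()        # shut the scanner down cleanly
worker.join()
print(len(calls) >= 1)
```

In production the interval would be minutes or hours, and the `run_separation` callback would contain the migration logic described in the next section.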

3.2.1.3 How to Separate Hot and Cold Data
Now that we know how the separation is triggered, let's look at how to actually carry it out. The basic process of hot-cold separation is as follows:

Determine whether the data is hot or cold;
Insert the cold data into the cold storage;
Delete the cold data from the hot storage.
To implement these three basic steps, we need to consider the following points:
None of the three steps can be guaranteed to succeed 100% of the time, so we need to ensure data consistency in code. To achieve eventual consistency, we can add a new column, "Is Cold Data" (Yes/No, default No), to the work order table. The hot-cold separation service first marks all the cold data it finds, then moves that data to the cold storage, and after the migration completes deletes the corresponding rows from the hot storage. If an anomaly occurs during migration or deletion, we add a retry mechanism to the migration and deletion code (a mainstream retry library such as Polly in .NET or guava-retrying in Java can be used). If the operation still fails after multiple retries, the code can either abort the hot-cold separation and raise an alert, or skip the failed data and continue migrating the rest.
When a deletion fails and is skipped, there is a high chance that duplicate data will be inserted into the cold storage during the next separation run. We therefore need to either check whether the data already exists in the cold storage before inserting it, or use an idempotent database operation for the insertion (such as MySQL's INSERT ... ON DUPLICATE KEY UPDATE statement).
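An idempotent insert makes a retried migration harmless. The sketch below uses SQLite's upsert syntax so it is runnable as-is; in MySQL the equivalent statement is INSERT ... ON DUPLICATE KEY UPDATE. The table and function names are assumptions for the demo.

```python
import sqlite3

# Hypothetical cold storage table; SQLite stands in for the real database.
cold = sqlite3.connect(":memory:")
cold.execute("CREATE TABLE work_order_cold (id INTEGER PRIMARY KEY, status TEXT)")

def move_to_cold(row):
    """Insert a row into cold storage; repeating the call changes nothing extra."""
    cold.execute(
        """INSERT INTO work_order_cold (id, status) VALUES (?, ?)
           ON CONFLICT(id) DO UPDATE SET status = excluded.status""",
        row,
    )

move_to_cold((1, "closed"))
move_to_cold((1, "closed"))  # a retried migration: no duplicate row is created
count = cold.execute("SELECT COUNT(*) FROM work_order_cold").fetchone()[0]
print(count)
```

Executing the same insert twice leaves exactly one row, which is precisely the idempotence property the Tip below describes.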
Now consider a problem: the work order system's data volume is huge, and inserting all the cold data into the cold storage at once would be very slow, possibly taking minutes or even hours. There are two solutions: batch processing and multi-threaded processing.

Tip: What is idempotence? Identical requests/operations, when executed multiple times, will yield the same result as when executed just once.

Let's first look at batch processing. Suppose our work order system has 10 million pieces of cold data. We can then perform the hot-cold separation according to the following process:

Retrieve the first 10,000 pieces of cold data;
Insert these 10,000 pieces into the cold storage;
Delete these 10,000 pieces from the hot storage;
Repeat steps 1-3 until all the cold data has been migrated.
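The loop above can be sketched as follows. SQLite stands in for the hot storage and a plain list for the cold storage, and a small batch size is used so the demo stays readable; these are illustrative assumptions, not the production setup.

```python
import sqlite3

# Hot storage with 25 cold rows already marked (is_cold = 1).
hot = sqlite3.connect(":memory:")
hot.execute("CREATE TABLE work_order (id INTEGER PRIMARY KEY, is_cold INTEGER)")
hot.executemany("INSERT INTO work_order VALUES (?, 1)", [(i,) for i in range(25)])

cold_store = []  # stands in for the cold storage
BATCH = 10       # 10,000 in the text; 10 here to keep the demo small

while True:
    # Step 1: retrieve the next batch of cold data.
    batch = hot.execute(
        "SELECT id FROM work_order WHERE is_cold = 1 LIMIT ?", (BATCH,)
    ).fetchall()
    if not batch:
        break                      # migration complete
    cold_store.extend(batch)       # step 2: insert into the cold storage
    hot.executemany("DELETE FROM work_order WHERE id = ?", batch)  # step 3

remaining = hot.execute("SELECT COUNT(*) FROM work_order").fetchone()[0]
print(len(cold_store), remaining)
```

Each iteration moves at most one batch, so no single statement touches millions of rows at once.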
Now let's look at multi-threaded processing. There are two approaches. One is to set up multiple timers, each of which starts a thread at a fixed interval to process data. The other is to use a thread pool: first calculate the total amount of cold data to be moved, then determine the number of threads needed based on how much data each thread can migrate. If the required number exceeds the pool size, use all the threads in the pool (though more threads does not always mean higher performance). The underlying principles of the two approaches are the same, as are the issues that need attention.
How can we prevent multiple threads from migrating the same cold data? We can use locks. Add a "Locked Thread ID" field to the work order table to identify the thread currently processing each row. When a thread fetches data, it writes its own thread ID into the Locked Thread ID field of those rows. After writing, it must not start migrating immediately; instead, it must query again for the rows locked by its own ID before migrating. This guards against another thread having written its ID first, which would otherwise let two threads process the same row. After the second query, migration can begin. Note that the data to migrate is the data returned by the second query, not the data the thread fetched initially.
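The claim-then-recheck pattern can be sketched like this. The schema, column names, and the external lock serializing SQLite access are all demo assumptions; a real MySQL deployment would rely on the database's own row-level concurrency control instead of `db_guard`.

```python
import sqlite3
import threading

# Hot storage with 10 cold rows, plus a "locked_by" column for the thread lock.
db = sqlite3.connect(":memory:", check_same_thread=False)
db.execute(
    "CREATE TABLE work_order (id INTEGER PRIMARY KEY, is_cold INTEGER, locked_by TEXT)"
)
db.executemany("INSERT INTO work_order VALUES (?, 1, NULL)", [(i,) for i in range(10)])
db_guard = threading.Lock()  # serializes SQLite access in this demo only

def claim_cold_rows(thread_id):
    with db_guard:
        # Step 1: try to lock every still-unlocked cold row.
        db.execute(
            "UPDATE work_order SET locked_by = ? "
            "WHERE is_cold = 1 AND locked_by IS NULL",
            (thread_id,),
        )
        # Step 2: query AGAIN and keep only the rows this thread really locked.
        return [r[0] for r in db.execute(
            "SELECT id FROM work_order WHERE locked_by = ?", (thread_id,))]

claimed = {}
def worker(tid):
    claimed[tid] = claim_cold_rows(tid)

threads = [threading.Thread(target=worker, args=(tid,)) for tid in ("t1", "t2")]
for t in threads:
    t.start()
for t in threads:
    t.join()

overlap = set(claimed["t1"]) & set(claimed["t2"])
total = len(claimed["t1"]) + len(claimed["t2"])
print(overlap, total)
```

However the two threads interleave, the second query guarantees each row is migrated by exactly one of them.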
This raises another issue. Suppose a thread crashes: the lock will likely never be released (the cold data in the work order table is never deleted). How do we handle this? It is quite simple. Add a lock-time column to the work order table to record when the lock was acquired, and allow other threads to re-claim a row once its lock has been held longer than N minutes (for example 5 minutes; the value of N should be determined through repeated tests in the test environment).
Of course, this introduces yet another issue. Suppose a thread has not crashed but genuinely needs more than the time limit to process its data; other threads will see only that the lock has timed out and may re-claim the same rows, so the data could be migrated twice. We can use the idempotent database operations mentioned in the previous section to make the repeated insertion harmless.
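The lock-expiry rule can be expressed as a small deterministic sketch. The in-memory dict and the function name are assumptions; in the real system the holder and timestamp live in the two lock columns of the work order table.

```python
# Sketch of lock expiry: a lock held longer than the timeout may be taken over.
LOCK_TIMEOUT = 300.0  # N = 5 minutes, in seconds; tune via repeated testing

locks = {}  # order_id -> (thread_id, locked_at_timestamp)

def try_claim(order_id, thread_id, now):
    """Claim a row if it is unlocked, or if its current lock has expired."""
    holder = locks.get(order_id)
    if holder is None or now - holder[1] > LOCK_TIMEOUT:
        locks[order_id] = (thread_id, now)  # free, or holder presumed dead
        return True
    return False  # lock still valid: another thread owns this row

first = try_claim(42, "t1", now=0.0)     # t1 acquires the lock
second = try_claim(42, "t2", now=100.0)  # within the timeout: refused
third = try_claim(42, "t2", now=400.0)   # timeout exceeded: t2 takes over
print(first, second, third)
```

If t1 was merely slow rather than dead, both threads may now migrate the same rows, which is exactly why the insertion into cold storage must be idempotent.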

3.2.1.4 How to Utilize Cold and Hot Data
This issue is also simple to handle. We can split cold-data queries and hot-data queries into two separate operations. By default, only hot data is queried; when cold data is needed, the caller passes a flag to the server indicating that the query should go to the cold storage.

TIP: Never query hot and cold data together in a single query.
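Routing by a caller-supplied flag can be sketched in a few lines. The dicts standing in for the two storages and the parameter name are illustrative assumptions.

```python
# Stand-ins for the two storages (assumed data for the demo).
HOT_DB = {"1001": "open ticket"}
COLD_DB = {"0007": "ticket closed in 2021"}

def query_work_order(order_id, use_cold_storage=False):
    """Query hot data by default; hit cold storage only when explicitly asked."""
    store = COLD_DB if use_cold_storage else HOT_DB
    return store.get(order_id)

hot_hit = query_work_order("1001")                          # served from hot storage
cold_miss = query_work_order("0007")                        # default query: not found
cold_hit = query_work_order("0007", use_cold_storage=True)  # explicit cold query
print(hot_hit, cold_miss, cold_hit)
```

Each call touches exactly one storage, which enforces the TIP above by construction.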

3.2.2 Plan Two: NoSQL Storage
The previous section discussed hot-cold storage using the same type of database. The principle of using NoSQL storage is the same, except that the cold storage is changed from a relational database to a NoSQL database; the process and precautions are also the same. The advantage of NoSQL as cold storage is that, as long as the data volume is within what the NoSQL database can handle, queries will be faster than with a relational database as cold storage, and our cold data volume is still quite large. Most popular NoSQL databases on the market today are suitable as cold storage. In a real project, the choice of NoSQL database should be based on the team's technical skills, the project requirements, operation and maintenance costs, and so on.

IV. Summary
After discussing partitioning and hot-cold separation, note that these two plans apply only when there is a clear partition key or a field that can identify hot and cold data. They cover most project requirements, but some projects have needs that fit neither plan; I will explain those in subsequent articles.