The desire to increase the security level of guarded objects has led to an increase in the number of cameras in video surveillance systems. However, cameras within the same system often work independently. Each camera provides its own information, analytics, and statistics, unconnected with data from other cameras. Such a situation does not allow one to see the big picture.
Video analysis is a vital tool in systems with a large number of cameras. Tracking an object's movements is one of the most promising features in video surveillance. Let's consider the algorithms of pathway construction on a single camera and the technology that connects several cameras within a video system.
Tracking based on a single camera
Tracking modules usually rely on a motion detector. To build a pathway, each frame containing moving objects is analyzed in sequence. Several moving objects may appear within the same frame, so the software must be able not only to build an object's pathway, but also to distinguish between the objects and their movements.
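As a rough illustration of the motion detector these modules rely on, the sketch below flags pixels that change between two consecutive frames. It is a minimal, hypothetical example: real detectors work on camera video with noise filtering and region grouping, while here frames are simply 2D lists of grayscale values.

```python
def detect_motion(prev_frame, curr_frame, threshold=25):
    """Return the set of (row, col) pixels whose intensity changed
    by more than `threshold` between two consecutive frames."""
    moving = set()
    for r, (prev_row, curr_row) in enumerate(zip(prev_frame, curr_frame)):
        for c, (p, q) in enumerate(zip(prev_row, curr_row)):
            if abs(q - p) > threshold:
                moving.add((r, c))
    return moving

# Example: a single bright pixel moves one cell to the right;
# both the vacated pixel and the newly occupied one are flagged.
frame1 = [[0, 0, 0], [0, 200, 0], [0, 0, 0]]
frame2 = [[0, 0, 0], [0, 0, 200], [0, 0, 0]]
moving_pixels = detect_motion(frame1, frame2)
```

The flagged pixels are what a tracking module would then group into moving objects before building pathways.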
Tracking based on two frames
The simplest version of tracking scrutinizes two frames to build a pathway. The first step is to detect movement in the current and preceding frames. The next step is to analyze the speed and direction of each object's movement, as well as its size, and to calculate the probability that an object moved from a given point in the preceding frame to a given point in the current frame. The most probable moves form the object's pathway.
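The association step described above can be sketched as follows. This is an assumed, simplified cost model (displacement plus size change stands in for the probability calculation), and the greedy lowest-cost matching is one possible way to pick the most probable moves:

```python
import math

def match_objects(prev_objs, curr_objs, max_cost=100.0):
    """Link each detection in the current frame to the most similar
    detection in the preceding frame.  Each object is a dict with
    'x', 'y' (centre) and 'size'; the cost mixes displacement and
    size change, so the lowest-cost pair is the most probable move."""
    pairs = []
    for i, p in enumerate(prev_objs):
        for j, c in enumerate(curr_objs):
            dist = math.hypot(c['x'] - p['x'], c['y'] - p['y'])
            cost = dist + abs(c['size'] - p['size'])
            if cost <= max_cost:
                pairs.append((cost, i, j))
    pairs.sort()  # cheapest (most probable) links first
    used_prev, used_curr, links = set(), set(), {}
    for cost, i, j in pairs:
        if i not in used_prev and j not in used_curr:
            links[i] = j  # index in prev frame -> index in curr frame
            used_prev.add(i)
            used_curr.add(j)
    return links

# Two objects each move a few pixels between frames.
prev_objs = [{'x': 0, 'y': 0, 'size': 10}, {'x': 50, 'y': 0, 'size': 20}]
curr_objs = [{'x': 52, 'y': 1, 'size': 20}, {'x': 3, 'y': 0, 'size': 10}]
links = match_objects(prev_objs, curr_objs)
```

Chaining such links frame after frame yields each object's pathway.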
Multiple-frame tracking
Objects can move on a screen in different ways. Their pathways can intersect, and objects can disappear and re-appear. For example, if a camera monitoring a highway has one car on its screen, that car may be overlapped by another one and then come out from behind it again. Some objects may overlap each other or abruptly change direction. In these cases the task of building an exact pathway becomes complicated, and the two-frame method is not suitable: it cannot provide a high degree of accuracy. Multiple-frame tracking is used in such cases. It analyzes a sequence of frames and performs continuous post-processing of the results.
The software package builds graphs and analyzes the transitions of objects from one state to another. The speed and direction of movement, position, and color characteristics are analyzed in order to determine which object corresponds to a particular movement. This analysis results in a set of the most probable movements of an object, which forms its pathway. The difference between the two methods lies in the fact that frame processing takes into account not only the current position of an object but its transition history as well. This improves accuracy tremendously in complex situations with intersecting motion and the disappearance and re-appearance of objects.
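The value of the transition history can be sketched with a toy example. Here each track predicts its next position from its last two points (an assumed constant-velocity model, far simpler than the graph analysis described above), and new detections are assigned to the nearest prediction. After two pathways cross, the prediction keeps their identities apart where matching on last position alone would be ambiguous:

```python
import math

def predict(track):
    """Predict the next position of a track from its last two points
    (constant-velocity assumption over the track's history)."""
    (x1, y1), (x2, y2) = track[-2], track[-1]
    return (2 * x2 - x1, 2 * y2 - y1)

def extend_tracks(tracks, detections, gate=10.0):
    """Append to each track the detection closest to its predicted
    position, provided the distance is within `gate`."""
    taken = set()
    for track in tracks:
        px, py = predict(track)
        best, best_d = None, gate
        for k, (x, y) in enumerate(detections):
            d = math.hypot(x - px, y - py)
            if k not in taken and d < best_d:
                best, best_d = k, d
        if best is not None:
            track.append(detections[best])
            taken.add(best)
    return tracks

# Two objects have just crossed at (2, 2); their histories imply
# different headings, so the new detections are assigned correctly.
tracks = [[(1, 1), (2, 2)], [(1, 3), (2, 2)]]
detections = [(3, 1), (3, 3)]
extend_tracks(tracks, detections)
```

A production system would of course weigh color and size as well, as the text notes, and keep several hypotheses per object rather than one.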
The above-mentioned algorithms work well with scenes in which objects move apart from one another, but they are not applicable to tracking high-density movement.
Correlation methods are used for the analysis of high-density movement. An operator specifies an area of the screen to be tracked; the software then searches for this area in subsequent frames and builds a pathway from the matches.
Any moving object may become a tracking area. Tracking is triggered either by a motion detector or when the software finds an object of a type it has been configured to detect. The software builds a color histogram of the selected area and picks out feature points, such as specific angles or distances. These features are then searched for in the following frames.
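A minimal sketch of the color-histogram part of this signature, under the assumption of grayscale pixels for brevity (real systems histogram color channels and add the feature points mentioned above):

```python
def color_histogram(region, bins=8):
    """Quantise the pixels of a selected area into `bins` intensity
    buckets, producing the signature searched for in later frames."""
    hist = [0] * bins
    for row in region:
        for value in row:  # intensity in 0..255
            hist[value * bins // 256] += 1
    return hist

def histogram_distance(h1, h2):
    """L1 distance between two histograms: the candidate area with
    the smallest distance to the sample is the best match."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

# The sample area and a visually similar candidate produce nearly
# identical histograms; a differently colored area does not.
sample = [[10, 200], [10, 200]]
candidate_same = [[12, 198], [9, 205]]
candidate_other = [[90, 90], [90, 90]]
```

Scanning candidate windows across each subsequent frame and keeping the best-matching one is what extends the pathway.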
The main disadvantage of the correlation method is that it requires a lot of resources. The initial analysis of the sample in question, such as the selection of colors, histogram building, or the set-up of feature points, requires ten or even a hundred times more computing power than a motion-detector method. Additionally, the correlation method builds pathways of specified objects only, whereas the two-frame and multiple-frame methods build pathways of all moving objects, providing a search tool for any object or its pathway. For this reason the correlation method cannot build pathways of every object in a scene with a high concentration of traffic.
Inter-camera tracking, the tracking of an object across multiple cameras within a video system, can be realized in two different ways.
The first method involves the installation of synchronized video cameras which monitor adjoining areas. The object simply passes out of one camera's field of view into that of another. The software detects this transition, picks the object up, and builds its pathway. The calibration of the cameras within the system is very important for this method's results to be precise. It is also important that similar equipment is used, because an object should look identical when it passes from one frame to another. The longer an object's transition, the more moving objects can appear on screen, and the probability of an error grows considerably as the number of cameras involved increases.
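The handoff logic for such synchronized cameras can be sketched as follows. The camera names, the adjacency table, and the frame dimensions are all assumptions for illustration; in a real system they would come from the calibration step the text describes:

```python
# Hypothetical layout: which camera's view adjoins each edge of a frame.
ADJACENT = {
    ('cam1', 'right'): 'cam2',
    ('cam2', 'left'): 'cam1',
}

def exit_edge(x, y, width, height, margin=5):
    """Classify the frame edge an object is leaving through, if any."""
    if x < margin:
        return 'left'
    if x > width - margin:
        return 'right'
    if y < margin:
        return 'top'
    if y > height - margin:
        return 'bottom'
    return None

def next_camera(camera, x, y, width=640, height=480):
    """Pick the synchronized camera expected to pick the object up,
    or None if the object is not near an adjoining edge."""
    edge = exit_edge(x, y, width, height)
    return ADJACENT.get((camera, edge))
```

When `next_camera` returns a neighbor, the tracker continues the same pathway using detections from that camera's stream.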
The second method is based on an interactive feature search and does not require special equipment. An operator identifies the existing cameras on a plan in the software, sets an average transition time from one camera to another, and selects from an archive a sample of the object whose pathway will be built. The object's parameters, such as proportions, size, and colors, can also be specified. The software displays all objects matching the search criteria, and the operator selects a particular one. After that, the selected sample is searched for on all other cameras. The software analyzes the area's map, determines when the selected object might have been captured by a particular camera, and presents the relevant results as a set of tracks, grouped by the object's frames from the same IP channel. The grouping of frames is based on the object's continuous movement within a camera's viewpoint. The operator selects the desired object from the presented results and triggers the next stage of the search. Thus, the search is performed in stages; it continues until the selected object has disappeared from all cameras' viewing areas or until the end-user has found what he was searching for. Pathway building can be stopped at any stage.
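One stage of this search can be sketched as a filter over archived sightings: a candidate track is plausible only if its camera is reachable from the last confirmed camera within roughly the configured transition time. The data shapes and the tolerance are assumptions for illustration:

```python
def candidate_tracks(sightings, last_camera, last_time,
                     transit_times, tolerance=10.0):
    """Filter archived sightings down to those the selected object
    could plausibly have produced.  `sightings` is a list of
    (camera, timestamp, track_id); `transit_times` maps
    (from_camera, to_camera) to the configured average transition
    time in seconds."""
    results = []
    for cam, t, track_id in sightings:
        expected = transit_times.get((last_camera, cam))
        if expected is None:
            continue  # no known route from the last camera
        if abs((t - last_time) - expected) <= tolerance:
            results.append((cam, t, track_id))
    return results

# The object was last confirmed on cam1 at t=100; cam2 is about
# 30 seconds away, so only the sighting near t=130 is plausible.
transit_times = {('cam1', 'cam2'): 30}
sightings = [('cam2', 128, 'a'), ('cam2', 300, 'b'), ('cam3', 130, 'c')]
plausible = candidate_tracks(sightings, 'cam1', 100, transit_times)
```

The operator then confirms one of the plausible tracks, and the search repeats from that camera and time, which is exactly the staged process described above.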
This method requires more of an operator's time and attention, but it provides more accurate results. Its advantage is that the cameras within a video system need not be connected and calibrated: step-by-step pathway building and the feature-search algorithms work independently.
Pathway transferring from a video to a plan
The multi-camera video analysis topic is very interesting to developers, and it presents great potential for technological improvement. For example, modern technologies allow displaying on a plan a pathway that passes from one camera to another, building a pathway for a moving object within a single camera's viewpoint, and transferring an object's transitions from point to point from a video onto a plan.
Generally speaking, such video analysis can be used not only for movement tracking, but also for building heat maps of intensive traffic, counting visitors, securing an entrance zone, or timing the stay within a zone.
The task of pathway transition from a video to a plan is rather complicated for several reasons.
1. Connecting a surveillance area with its plan is a major difficulty in itself. The program has to calculate an object's coordinates and movements in one space and transfer them to a plan that is in a different space. Re-calculating all these frames one by one is no trivial task: it is not enough to simply measure the distances and boundary points and transfer them to the plan, because the plan views an object from above, while cameras can usually be placed almost anywhere. This task can be solved by organizing a stereo system: at least two cameras installed in different positions monitor the same area. In this case, the software package can build a 3D model of the space and pinpoint the movements of an object exactly.
2. Distortions. As in any optical system, a camera can introduce distortions, which may lead to errors in pathway building on the plan.
3. Linking a camera's viewpoint to the plan. This can be done manually by an operator, but that is rather inconvenient, especially in large video systems. An algorithm is needed that can calculate the area independently, based on reference objects indicated on the plan and falling within the camera's viewpoint.
4. Separating objects from one another. If two objects' positions differ noticeably in only one coordinate, and the difference is negligible in the other two (for example, two objects moving parallel to each other, one nearer and one farther away), then from the camera's viewpoint these objects look the same: they are combined into one and their pathways merge, although in fact their pathways are different. Moving objects on a plan (top view) must be kept separate, which is why one point on a screen may correspond to different points on the plan. This necessitates accurate algorithms for grouping pixels into objects and for separating overlapping objects. The problem can be partially solved by using stereo vision.
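For a flat floor, the camera-to-plan transfer in item 1 is commonly expressed as a planar homography: a 3x3 matrix, found during calibration, that maps image coordinates to plan coordinates. The sketch below applies such a matrix in pure Python; the matrices used are illustrative assumptions (a real one would be computed from reference points on the plan):

```python
def to_plan(point, H):
    """Map an image point (x, y) onto the floor plan using a 3x3
    homography H in homogeneous coordinates: the point (x, y, 1) is
    multiplied by H and the result divided by its third component."""
    x, y = point
    X = H[0][0] * x + H[0][1] * y + H[0][2]
    Y = H[1][0] * x + H[1][1] * y + H[1][2]
    W = H[2][0] * x + H[2][1] * y + H[2][2]
    return (X / W, Y / W)

# An identity homography leaves points unchanged; a real H would
# encode the camera's perspective relative to the top-down plan.
H_identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
H_scaled = [[2, 0, 0], [0, 2, 0], [0, 0, 1]]  # plan at twice the scale
```

Applying `to_plan` to every point of a pathway built in the image transfers that pathway onto the plan; the stereo approach mentioned above replaces this flat-floor assumption with a full 3D model.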
Accuracy of tracking methods
The development potential for multi-camera video analysis and motion-tracking technology is still very high. Existing tracking algorithms for one camera or for many cameras within a system differ greatly in their approaches and in the accuracy of their results.