
Improved Object Localization Using Accurate Distance Estimation in Wireless Multimedia Sensor Networks

  • Yasar Abbas Ur Rehman,

    Affiliations NUSys Lab/ National University of Computer and Emerging Sciences (NUCES), Peshawar, KPK, Pakistan, Department of Electrical Engineering/City University of Science and Information Technology, Peshawar, KPK, Pakistan

  • Muhammad Tariq ,

    tariq.khan@nu.edu.pk

    Affiliation NUSys Lab/ National University of Computer and Emerging Sciences (NUCES), Peshawar, KPK, Pakistan

  • Omar Usman Khan

    Affiliation NUSys Lab/ National University of Computer and Emerging Sciences (NUCES), Peshawar, KPK, Pakistan


Abstract

Object localization plays a key role in many popular applications of Wireless Multimedia Sensor Networks (WMSN) and, as a result, has acquired significant status in the research community. A significant body of research performs this task without considering node orientation, object geometry, and environmental variations; consequently, the localized object does not reflect real-world scenarios. In this paper, a novel object localization scheme for WMSN is proposed that combines range-free localization, computer vision, and Principal Component Analysis (PCA) based algorithms. The proposed approach provides the best possible approximation of the distance between a WMSN sink and an object, and of the object's orientation, using image-based information. Simulation results report 99% efficiency and an error ratio of 0.01 (around 1 ft) when compared to other popular techniques.

Introduction

The accessibility of low-cost and low-complexity multimedia hardware, such as cameras and microphones, has allowed for the transformation of existing Wireless Sensor Networks (WSN) into Wireless Multimedia Sensor Networks (WMSN). Although the variety of services provided by recent WMSN has surpassed traditional WSN, certain constraints are imposed by the properties inherited from traditional WSN [1–3]. These constraints include limited processing capabilities, battery power, bandwidth, and on-board memory. Since both WSN and WMSN involve battery-powered devices, the need for energy conservation to prolong sensor lifetime is an important design parameter when modeling and designing protocols, algorithms, and services for these networks. One important technique used in designing these services is the localization of individual sensor nodes or objects within the network [4]. Amongst the various node localization techniques, range-free methods have gained widespread attention due to their ability to localize devices with an acceptable error tolerance without the need for any specialized hardware component [5, 6]. Object localization involves estimating an object's position within a network as it traverses a path in between individual WSN nodes [7].

A key problem in WMSN is the isolation of objects from image backgrounds [8]. Most approaches assume static nodes and thus rely on simple background-subtraction-based methods [9–12]. Another approach takes into account the fusion of information from multiple WMSN nodes to compensate for localization error [13]. In practice, however, the assumption of static nodes can lead to false results due to uncontrollable environmental variations. Another source of false detection is the lack of consideration of the geometry and orientation of an object. This false information is then eventually conveyed to sink nodes. The orientation of a node with respect to an object is mostly unpredictable and involves the fusion of disparate WMSN nodes [14].

The central theme of this paper is to effectively isolate and locate objects from images by fusing range-free techniques with machine learning and computer vision algorithms. The objective is to increase real-world object localization accuracy from images received at the sink node by utilizing the coordinate information of sensor nodes together with Principal Component Analysis (PCA) and computer vision algorithms. The energy consumption footprint of this method is also minimized in order to prolong the WMSN lifetime. The proposed method is analyzed and compared to existing techniques.

The rest of this paper is organized as follows. The related work section discusses the state of the art in range-free localization and object localization techniques. The methodology section defines the proposed object localization method for a localized WMSN. The results are discussed and analyzed in the simulation results section. The paper then concludes with directions for future work.

Related Work

Since WMSN are essentially WSN with multimedia-equipped devices [15], it can be assumed that they inherit traditional range-free localization algorithms. This assumption inspires us to review some of the related state-of-the-art work in the area.

The basic aim of range-free localization algorithms is to locate individual sensor nodes by utilizing existing resources, without depending on additional hardware infrastructure [16]. This localization process is completed in three steps: determining the relative distances between individual nodes, approximating node positions by simultaneously solving a set of linear equations, and finally refining each position using position information from neighboring nodes [17]. For efficiency, the localization scheme must be robust and energy efficient in areas of low sensor density, or when obstacles are present between sensing nodes. Likewise, the scheme must also hold against undetermined nodes [18–20]. However, since WSN deployments are usually random, anisotropic patterns and holes can pose a challenge. For these deployment-related constraints, detour path angular information (DPAI) based localization can be used [21].
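The first two of these steps (relative ranging, then solving a linear system) can be sketched as a generic least-squares multilateration; this is an illustrative sketch with assumed names, not the DPAI algorithm itself:

```python
import numpy as np

def multilaterate(anchors, dists):
    """Least-squares position estimate from anchor coordinates and
    measured distances: subtracting the first circle equation from the
    rest removes the quadratic terms, leaving a linear system."""
    anchors = np.asarray(anchors, dtype=float)
    dists = np.asarray(dists, dtype=float)
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (dists[0] ** 2 - dists[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
true_pos = np.array([4.0, 6.0])
dists = [float(np.linalg.norm(true_pos - np.array(a))) for a in anchors]
print(multilaterate(anchors, dists))  # ≈ [4. 6.]
```

With noisy distance measurements the same least-squares solve yields an approximate position, which the third (refinement) step would then improve using neighbor information.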

For object localization, computer vision algorithms are predominantly used for detection, recognition, and tracking [22–25]. As can be anticipated, vision-based algorithms in a distributed setup not only involve processing overhead, but also constrain image transmission over limited bandwidths; data compression is therefore natural [26]. The processing overhead is associated with algorithmic complexity. For tracking-based applications, continuous capture of a target entails a significant energy footprint. Various solutions exist to minimize this footprint, ranging from rotational camera sensors [27] to time-stamped varying information capture using single static cameras [28]. Another approach to reduce transmission latency and conserve energy is the adoption of cluster-based communication with the aid of Kalman filters [29], in which each cluster tracks objects using cooperative communication between cluster elements in order to aggregate data.

The real-world coordinates of objects can be estimated from image-based coordinates using the intrinsic and extrinsic properties of cameras [28]. However, numerous difficulties arise in this transformation process, for instance, the disturbance of camera positions due to strong winds, the presence of moving artefacts, or shadowing effects. It is, therefore, not sufficient to send image-based information to sink nodes; object meta-data from the image must also be conveyed [30]. To extract this information, a straightforward approach based on frame differencing is possible [9–12], assuming static nodes. However, in the scenarios just mentioned, this method is less efficient. To cope with this problem, this paper proposes to localize objects in WMSN using image information received from different nodes at the sink node. The orientation of the object is then determined with respect to the sink. The proposed methodology utilizes a fusion of range-free localization, PCA, and computer vision algorithms to accurately localize a target object while respecting the energy constraints of the WMSN nodes.
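In its simplest static-node form, the frame-differencing approach mentioned above reduces to thresholding the difference against a reference background; a minimal sketch with assumed array inputs:

```python
import numpy as np

def frame_difference(background, frame, threshold=25):
    """Static-node foreground mask: absolute difference against a
    reference background image, thresholded to a binary mask."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return (diff > threshold).astype(np.uint8)

background = np.zeros((8, 8), dtype=np.uint8)
frame = background.copy()
frame[2:5, 2:5] = 200          # a bright object enters the scene
mask = frame_difference(background, frame)
print(mask.sum())              # 9 foreground pixels
```

This fragility is exactly the point made above: any camera motion or lighting change corrupts the reference background and hence the mask.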

Methodology

Consider a heterogeneous WMSN network with m multimedia and s sensor nodes deployed randomly in a field. To localize an object o traversing a path in between the nodes, it is important that the other WMSN nodes, including the sink node, are also aware of their positions. The WMSN node positions are localized using the DPAI procedure [21] by obtaining the location information of anchor nodes in the network. Once the nodes are localized, the sink node floods its unique identity to all nodes in the network. Upon receipt of an identity packet, the WMSN multimedia nodes start the object localization process. This process is illustrated in Algorithm 1.

Algorithm 1: In-Node Process

Input: V = {v1, …, vn}: Set of WMSN nodes with unknown locations

    D(xd, yd): Sink node location

    Fi: Frame captured by node vi

Output: V(X,Y) = {v1(x,y), v2(x,y), …, vn(x,y)}: Localized nodes with location information

    LL1i: Low-Low level 1 sub-band corresponding to frame i

Node Localization localize()

  for i = 1 → n do

1   calculate (x, y) ∈ vi; Using DPAI [21]

Frame Capture & Transmit captureTransmit()

1   capture Fi;

2   extract(LL1i, SubBand);

3   packetize{LL1i, vi(x,y)};

4   transmit{LL1i, vi(x,y)};

An example scenario is depicted in Fig 1, where a multimedia node with location (Xn, Yn) captures an image of the scene containing a target object. This image is then decomposed into four multi-resolution images using the 2D Discrete Wavelet Transform (2D-DWT). Of these decomposed images, only the coarse-level coefficient image, i.e., the Low-Low level 1 (LL1) sub-band, is transmitted to the sink over a multi-hop route. The selection of the LL1 sub-band has multiple advantages, as it entails minimal processing, storage, and transmission energy. Its small size is thus ideal given the limited bandwidth of the WMSN. The sink node, upon receipt of the LL1 sub-band image, performs post-processing using computer vision algorithms aided by the PCA technique. This process extracts an object from the received image, and is shown in Algorithm 2.
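The LL1 extraction step can be illustrated with a one-level Haar-style decomposition; the 2×2 averaging below is a simplified stand-in for the paper's 2D-DWT, assuming grayscale frames:

```python
import numpy as np

def haar_ll1(img):
    """One-level Haar-style 2-D decomposition, keeping only the LL1
    (coarse) sub-band: a quarter-size low-pass approximation of the
    frame, which is what each node transmits in Algorithm 1."""
    img = img.astype(float)
    # Average 2x2 blocks: first pair rows, then pair columns
    # (a separable low-pass filter followed by downsampling).
    rows = (img[0::2, :] + img[1::2, :]) / 2.0
    ll1 = (rows[:, 0::2] + rows[:, 1::2]) / 2.0
    return ll1

frame = np.arange(16, dtype=float).reshape(4, 4)
print(haar_ll1(frame).shape)  # (2, 2)
```

The quarter-size output is what makes the LL1 choice attractive for bandwidth: only one of the four sub-bands ever leaves the node.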

Algorithm 2: Object Localization at Sink Node

Input: V(X,Y) = {v1(x,y), …, vn(x,y)}: Localized nodes along with location information

    LL1i: Low-Low level 1 sub-band corresponding to node i

Output: dmin(i): Minimum distance to the object from node vi

     CD,O: Distance from D(xd, yd) to object Oi

     βT: Angle between object and sink node

Select Object Close to Node vi selectObject()

  for i = 1 → n do

1    extract Oi(LL1i);

   if Oi == 1 then

2    vi(x,y) = vi(Xs, Ys);

3   Calculate BD,N;

4   Calculate Omax(i);

Compute Dmin from vi to Oi computeDMin()

1    Calculate P = A − μ;

2    Calculate U = PT · [Eg1, …, Eg20];

3    Calculate Di = norm(Fi − U);

4    Calculate Dmin(i) = min{Di};

5    AO,N = Dmin(i);

Compute Orientation & Distance from Object to Destination

DestinationOrientation()

1   Calculate CD,O; Using Eq (12)

   if CD,O > BD,N then

2     Calculate βT = β + β′; Using Eqs (14) and (15)

   else if CD,O < BD,N then

3     Calculate βT = β + β′; Using Eqs (14) and (15)

   else

4     Calculate βT; Using Eq (13)

To obtain the location information of the object, it is necessary to estimate its distance from the WMSN node. This distance is obtained by preparing the information set 𝕽 from the received image, given as:

𝕽 = {(oi, hi)}, i = 1, …, n (1)

where oi is the size of the ith object, and hi is its respective height from the baseline of the image received from the ith WMSN node camera. Only one object per image is considered; the object closest to the WMSN node camera is preferred. This object will either have the maximum size or the minimum height. An example is illustrated in Fig 2, where an image received from node i contains two objects, oi and o′i. After prioritizing the objects based on their size and height, oi is ultimately selected.

Fig 2. Object oi has maximum size and minimum height hi as compared to object o′i with height h′i.

https://doi.org/10.1371/journal.pone.0141558.g002

Before discussing the various cases, it is important to describe the indexing process used for prioritizing the objects. Object sizes are arranged such that the smallest index i corresponds to the object with the maximum area relative to the other objects in the same image. Object heights are arranged such that the smallest index i′ corresponds to the minimum height of the object from the baseline. Objects with minimum heights are given priority; the object size parameter, however, allows a reliable selection when multiple objects share the same height. The selection process is shown in Fig 3, where three possibilities can arise.

  1. If the index i of a maximum object size and i′ of minimum object height are the same, then the object corresponding to index i is selected.
  2. If the index i of a maximum object size and i′ of minimum object height are not the same, then the object corresponding to index i′ is selected.
  3. If there are multiple objects with the same maximum object size, or with the same minimum object height, then priority is assigned to the object having the least index i amongst the participating indices.
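The three selection rules above can be sketched as follows; the tuple representation and function name are illustrative:

```python
def select_object(objects):
    """Pick one object per frame following the three selection rules.
    objects: list of (size, height) tuples, indexed from 0.
    Returns the index of the chosen object."""
    # Ties resolve to the least index (rule 3) because max()/min()
    # over a range keep the first occurrence of the best value.
    i_size = max(range(len(objects)), key=lambda k: objects[k][0])    # max size
    i_height = min(range(len(objects)), key=lambda k: objects[k][1])  # min height
    # Rule 1: indices agree -> take that object.
    # Rule 2: indices differ -> minimum height wins.
    return i_size if i_size == i_height else i_height

# Object 0 is both largest and lowest -> rule 1 picks it.
print(select_object([(120, 5), (40, 30)]))   # 0
# The largest object is not the lowest -> rule 2 picks index 1.
print(select_object([(120, 30), (40, 5)]))   # 1
```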

Upon selection of an appropriate object from a received image, the distance between the object and the WMSN node is then estimated. A referential frame is designed for this purpose, as shown in Fig 4(a). The sink at location D(xd, yd) is treated as the origin point. The distance between the sink and the WMSN multimedia node would already have been estimated using the DPAI algorithm [21].

Fig 4.

(a) Referential frame for estimating the location of an object. (b) Case where the object's distance from the sink node is greater than that of the monitoring WMSN node, CD,O > BD,N. (c) Case where the object's distance from the sink node is smaller than that of the monitoring WMSN node, CD,O < BD,N. (d) Case where the object's distance from the sink node equals that of the monitoring WMSN node, CD,O = BD,N.

https://doi.org/10.1371/journal.pone.0141558.g004

The distance AO,N between the object O(xi, yi) and the WMSN multimedia node N(xn, yn) is estimated from the image received at the sink node using PCA. For this, a predefined matrix M of n objects, bearing size variations obtained at distances from 1 up to 10 feet, is defined:

M = [Si,j], i = 1, …, n, j = 1, …, 10 (2)

where each element Si,j represents the size of the ith object at a distance of j ft. The mean μ, variance var, and covariance cov of the vectorized form S = [S1,1, S1,2, …, Sn,10] of matrix M, with N = 10n elements Sk, are given as:

μ = (1/N) Σk Sk (3)

var = (1/N) Σk (Sk − μ)² (4)

cov = E[(S − μ)(S − μ)T] (5)

Then, the eigenvectors of the covariance matrix are calculated and arranged in descending order of their eigenvalues as E = [Eg1, Eg2, …, Egn]. Of these, the first twenty largest eigenvectors are selected. A new feature set F is then obtained as:

F = (S − μ)T · [Eg1, …, Eg20] (6)

For a given object size oi, the variation P, feature vector U, and distances di are given as:

P = oi − μ (7)

U = PT · [Eg1, …, Eg20] (8)

di = ‖Fi − U‖ (9)

Finally, the minimum distance over the set {di} is computed as:

dmin(i) = min{di} (10)

where dmin(i) corresponds to the closest match for U among the Fi of the vectorized matrix S. Thus, dmin(i) = AO,N is taken as the distance between the source WMSN multimedia node and the object. After computing AO,N, the coordinates of object O can be obtained as: (11)
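Under one plausible reading of Eqs (2)–(10), where each column of M (all reference sizes at one distance) serves as a training vector labelled with its capture distance, the PCA pipeline can be sketched as follows; the synthetic size profiles and variable names are illustrative:

```python
import numpy as np

def train(M, k=2):
    """M[i, j]: size of reference object i at distance j+1 ft (Eq 2).
    Each column of M is treated as a training vector labelled with
    that distance; PCA keeps the k largest eigenvectors of the
    covariance (Eqs 3-6)."""
    X = M.T                                    # rows: distances, cols: objects
    mu = X.mean(axis=0)
    cov = np.cov(X - mu, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    E = vecs[:, np.argsort(vals)[::-1][:k]]    # descending eigenvalue order
    F = (X - mu) @ E                           # training feature set (Eq 6)
    return mu, E, F

def estimate_distance(sizes, mu, E, F):
    """Project an observed size vector and return the nearest training
    distance in feet (Eqs 7-10): the argmin row of F gives d_min."""
    U = (sizes - mu) @ E
    return int(np.argmin(np.linalg.norm(F - U, axis=1))) + 1

# Three synthetic reference objects whose size decays with distance.
d = np.arange(1, 11, dtype=float)
M = np.stack([800.0 * np.exp(-0.4 * d) * s for s in (1.0, 1.3, 0.8)])
mu, E, F = train(M)
print(estimate_distance(M.T[4], mu, E, F))  # 5  (the 5 ft training column)
```

The nearest-match step is why the scheme degrades at the far end of the range: once two distance classes produce nearly identical sizes (e.g. 9 ft and 10 ft), their projections become indistinguishable.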

Finally, the distance CD,O between the sink node D and the object O can be computed using the distance formula:

CD,O = √((xd − xi)² + (yd − yi)²) (12)

Once all the distances BD,N, AO,N, and CD,O are computed, the object orientation βT is calculated. For this purpose, the edges of the triangle formed by the coordinates of the WMSN multimedia node N, object O, and sink node D are analyzed. For the case where distance CD,O is equal to BD,N, triangle ΔDOP (see Fig 4d) can be used to compute the orientation βT as: (13) For cases where the two distances are unequal, the orientation βT is computed as the sum of the angles β and β′ of two constituent triangles, ΔDNO and ΔDPN (see Fig 4b and 4c). These are given as: (14) (15)
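When all three edge lengths AO,N, BD,N, and CD,O are available, the angle at the sink can be recovered in one step with the law of cosines; this is an assumption about the intended geometry for illustration, not the paper's exact per-case construction of Eqs (13)–(15):

```python
import math

def orientation_at_sink(a_on, b_dn, c_do):
    """Angle (degrees) at the sink D in triangle D-N-O, recovered from
    the three edge lengths via the law of cosines. A single formula is
    used here instead of the per-case split; argument names mirror the
    edge labels A_{O,N}, B_{D,N}, C_{D,O}."""
    cos_b = (b_dn ** 2 + c_do ** 2 - a_on ** 2) / (2.0 * b_dn * c_do)
    # Clamp against floating-point drift before acos.
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_b))))

# 3-4-5 right triangle: the angle opposite the edge of length 3
print(round(orientation_at_sink(3.0, 4.0, 5.0), 1))  # 36.9
```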

Simulation Results

To analyze the performance of the proposed method, we randomly deployed 10 WMSN sensor nodes, including the destination node, in a 100 × 100 meter field, as shown in Fig 5. The radio range is set to 25 meters to increase the probability of a maximum number of neighbors for each node. The nodes are first localized using the DPAI method [21]. After localization, every sensor node transmits an LL1 sub-band image (acquired after a 2D-DWT) of the monitored scene to the sink node. The sink node then runs a computer vision algorithm to find and extract any objects of interest. If found, these are recorded in the set 𝕽 along with their size and height from the image baseline. For multiple objects in the same image, a priority selection process is performed to select only one candidate object. This information is then fed to the PCA algorithm, which, based upon its predefined database, provides a distance estimate for the given size.

To further investigate the geometry of the object in the received frame, the variation in size of the isolated objects with respect to distance is studied. For a specific object, the object size decays exponentially with distance from the camera, as shown in Fig 6. At the farthest distances, the object size is almost 1 pixel.

To calculate the distance between the object and the sink node, the PCA algorithm is first trained with 10 different objects. During the training phase, each object image is taken at distances of 1 to 10 feet from the camera. From these images, the respective object size is extracted. A matrix M of the objects is then constructed using Eq (2). For M, a feature set F and distances di are then computed using Eqs (6)–(9). For an unknown object size, the minimum distance is estimated using Eq (10), and the object location and its distance to the sink node follow from Eqs (11) and (12).

Distance Estimation between WMSN node and Object

Tables 1 and 2 show the object distance and size computed using PCA, compared to a regression-based technique and to Oztarak et al. [28]. For simplicity, the variation of a single object's size with respect to distances from 1 up to 10 feet is taken. As seen in Table 1, the error frequency of the PCA-based localization is 1/10, lower than that of the other localization techniques. This error is attributed to uncertainty in the object's size at the farthest distance; the PCA technique therefore computes the same distance at 9 and 10 feet. Table 2 shows the object size calculation, in pixels, given the object's distance in feet. In this case, the error of the PCA-based technique is exactly 0, as compared to the other techniques.

Figs 7–9 show the distance between the WMSN node and the object using the PCA, regression, and Oztarak et al. [28] localization techniques. In Fig 7, the error between the actual distance and the distance estimated by the PCA technique is almost zero. Fig 10 shows the error in distance estimation for the regression, PCA, and Oztarak et al. [28] localization techniques. Again, it can be observed that the error of the PCA-based technique is almost zero.

Fig 10. Error in distance estimation between regression, PCA, and Oztarak et al. based localization methods.

https://doi.org/10.1371/journal.pone.0141558.g010

The efficiency η of the proposed method is calculated as:

η = (TnoError / T) × 100 (16)

where T represents the total number of observations, and TnoError represents the number of observations where no error was reported. With this, it can be observed that the percentage efficiency of calculating the distance between the object and the source WMSN node is 40% using the method by Oztarak et al. [28], 59% using regression, and 99% using our PCA-based approach. Fig 11 shows the complete result of the normalized object size variation with distance.
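Eq (16) reduces to a one-line computation, shown here under the reading that the numerator counts error-free observations; variable names are illustrative, not the paper's notation:

```python
def efficiency(total_obs, error_free_obs):
    """Percentage efficiency: the share of observations whose distance
    estimate fell within tolerance, as a percentage of all observations."""
    return 100.0 * error_free_obs / total_obs

# Reproduces the reported figures: 99% (PCA), 59% (regression), 40% (Oztarak et al.)
print(efficiency(100, 99), efficiency(100, 59), efficiency(100, 40))
```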

Fig 12 shows the distance estimation under noisy measurements, where it can be observed that the distance estimated using the PCA-based technique provides a satisfactory result as compared to the regression and Oztarak et al. [28] techniques. Fig 13 shows the object orientation with respect to the sink node. Here, it can be observed that the orientation obtained using all three localization methods is close to the actual orientation.

Fig 13. Object orientation βT obtained at sink node from various image sources i.

https://doi.org/10.1371/journal.pone.0141558.g013

Table 3 reports the comparison of the PCA, regression, and Oztarak et al. [28] techniques in terms of various parameters, such as size and camera orientation. To calculate the efficiency of these techniques, a tolerance of 2 feet for distance estimation and 200 pixels for size estimation has been considered. The ratio ρ is given as: (17)

Table 4 shows the estimated object location using the PCA, regression, and Oztarak et al. [28] based techniques. As depicted in Table 4, the error is reduced to 1 foot using the PCA-based technique. Fig 14 shows the final result in the network map.

Conclusion

This manuscript presents a method for object localization in wireless multimedia sensor networks that uses range-free localization, machine learning, and computer vision techniques. Object localization is an important component of many popular applications of WMSN. The first step is to localize the WMSN nodes. The image acquisition process then begins, where all images are processed before being delivered to the sink node. The main objective of this processing is to reduce network bandwidth usage, and it is performed by applying a 2D Discrete Wavelet Transform, which decomposes each image into various sub-bands of varying sizes and quality. The sink employs a PCA-based technique to localize the object. Node orientation and object geometry are taken into account throughout the process. Simulation results report 99% efficiency and an error ratio of 0.01 (around 1 ft) when compared to other popular techniques.

Supporting Information

S1 Dataset. 10 images of a rechargeable battery taken at distances of 1 up to 10 feet.

https://doi.org/10.1371/journal.pone.0141558.s001

(RAR)

S2 Dataset. 10 images of a 12 × 8 inch box taken at distances of 1 up to 10 feet.

https://doi.org/10.1371/journal.pone.0141558.s002

(RAR)

S3 Dataset. 10 images of a 4 × 8 inch box taken at distances of 1 up to 10 feet.

https://doi.org/10.1371/journal.pone.0141558.s003

(RAR)

S4 Dataset. 10 images of a brown bag taken at distances of 1 up to 10 feet.

https://doi.org/10.1371/journal.pone.0141558.s004

(RAR)

S5 Dataset. 10 images of a black bag taken at distances of 1 up to 10 feet.

https://doi.org/10.1371/journal.pone.0141558.s005

(RAR)

S6 Dataset. 10 images of an umbrella taken at distances of 1 up to 10 feet.

https://doi.org/10.1371/journal.pone.0141558.s006

(RAR)

S7 Dataset. 10 images of a paint can taken at distances of 1 up to 10 feet.

https://doi.org/10.1371/journal.pone.0141558.s007

(RAR)

S8 Dataset. 10 images of a cloth bin taken at distances of 1 up to 10 feet.

https://doi.org/10.1371/journal.pone.0141558.s008

(RAR)

S9 Dataset. 10 images of a shopping bag taken at distances of 1 up to 10 feet.

https://doi.org/10.1371/journal.pone.0141558.s009

(RAR)

S10 Dataset. 10 images of a bunch of electrical wires taken at distances of 1 up to 10 feet.

https://doi.org/10.1371/journal.pone.0141558.s010

(RAR)

Author Contributions

Conceived and designed the experiments: YAR MT. Performed the experiments: YAR MT. Analyzed the data: YAR MT OUK. Contributed reagents/materials/analysis tools: YAR. Wrote the paper: YAR MT OUK. Simulation setup: YAR MT.

References

  1. Akyildiz IF, Melodia T, Chowdhury KR. A Survey on Wireless Multimedia Sensor Networks. Comput Netw. 2007 Mar;51(4):921–960. Available from: http://dx.doi.org/10.1016/j.comnet.2006.10.002.
  2. Costa DG, Silva I, Guedes LA, Vasques F, Portugal P. Availability Issues in Wireless Visual Sensor Networks. Sensors. 2014;14(2):2795–2821. Available from: http://www.mdpi.com/1424-8220/14/2/2795. pmid:24526301
  3. Liu X. A Survey on Wireless Camera Sensor Networks. In: Li S, Jin Q, Jiang X, Park JJJH, editors. Frontier and Future Development of Information Technology in Medicine and Education. vol. 269 of Lecture Notes in Electrical Engineering. Springer Netherlands; 2014. p. 1085–1094. Available from: http://dx.doi.org/10.1007/978-94-007-7618-0_106.
  4. Tran DA, Nguyen T. Localization In Wireless Sensor Networks Based on Support Vector Machines. Parallel and Distributed Systems, IEEE Transactions on. 2008 July;19(7):981–994.
  5. Chandrasekhar V, Seah WK, Choo YS, Ee HV. Localization in Underwater Sensor Networks: Survey and Challenges. In: Proceedings of the 1st ACM International Workshop on Underwater Networks. WUWNet'06. New York, NY, USA: ACM; 2006. p. 33–40. Available from: http://doi.acm.org/10.1145/1161039.1161047.
  6. El Assaf A, Zaidi S, Affes S, Kandil N. Range-Free Localization Algorithm for Anisotropic Wireless Sensor Networks. In: Vehicular Technology Conference (VTC Fall), 2014 IEEE 80th; 2014. p. 1–5.
  7. Gao D, Zhu W, Xu X, Chao HC. A hybrid localization and tracking system in camera sensor networks. International Journal of Communication Systems. 2014;27(4):606–622. Available from: http://dx.doi.org/10.1002/dac.2492.
  8. Bouwmans T. Traditional and recent approaches in background modeling for foreground detection: An overview. Computer Science Review. 2014;11–12(0):31–66. Available from: http://www.sciencedirect.com/science/article/pii/S1574013714000033.
  9. Li W, Portilla J, Moreno F, Liang G, Riesgo T. Improving target localization accuracy of wireless visual sensor networks. In: IECON 2011 - 37th Annual Conference on IEEE Industrial Electronics Society; 2011. p. 3814–3819.
  10. Nguyen T, Jeong Y, Trinh D, Shin H. Location-aware visual radios. Wireless Communications, IEEE. 2014 August;21(4):28–36.
  11. O'Rourke D, Moore D, Wark T. Demo abstract: Fusion of audio and image information for efficient object detection and capture. In: Information Processing in Sensor Networks, 2009. IPSN 2009. International Conference on; 2009. p. 401–402.
  12. Alaei M, Barcelo-Ordinas JM. A hybrid cooperative design for energy-efficient surveillance in Wireless Multimedia Sensor Networks. In: European Wireless, 2012. EW. 18th European Wireless Conference; 2012. p. 1–7.
  13. J R Martinez-de Dios AJG, Ollero A. Localization and Tracking Using Camera-Based Wireless Sensor Networks. In: Thomas C, editor. Sensor Fusion - Foundation and Applications. InTech; 2011.
  14. Öztarak H, Akkaya K, Yazici A. Providing Automated Actions in Wireless Multimedia Sensor Networks via Active Rules. In: Gelenbe E, Lent R, Sakellari G, editors. Computer and Information Sciences II. Springer London; 2012. p. 185–190. Available from: http://dx.doi.org/10.1007/978-1-4471-2155-8_23.
  15. Tavli B, Bicakci K, Zilan R, Barcelo-Ordinas J. A survey of visual sensor network platforms. Multimedia Tools and Applications. 2012;60(3):689–726. Available from: http://dx.doi.org/10.1007/s11042-011-0840-z.
  16. Huang Q, Selvakennedy S. A Range-Free Localization Algorithm for Wireless Sensor Networks. In: Vehicular Technology Conference, 2006. VTC 2006-Spring. IEEE 63rd. vol. 1; 2006. p. 349–353.
  17. Langendoen K, Reijers N. Distributed Localization in Wireless Sensor Networks: A Quantitative Comparison. Comput Netw. 2003 Nov;43(4):499–518. Available from: http://dx.doi.org/10.1016/S1389-1286(03)00356-6.
  18. Paul AK, Sato T. Effective Data Gathering and Energy Efficient Communication Protocol in Wireless Sensor Network. In: Wireless Personal Multimedia Communications (WPMC), 2011 14th International Symposium on; 2011. p. 1–5.
  19. Wang C, Xiao L. Locating Sensors in Concave Areas. In: INFOCOM 2006. 25th IEEE International Conference on Computer Communications. Proceedings; 2006. p. 1–12.
  20. Liu C, Wu K. Performance evaluation of range-free localization methods for wireless sensor networks. In: Performance, Computing, and Communications Conference, 2005. IPCCC 2005. 24th IEEE International; 2005. p. 59–66.
  21. Paul AK, Sato T. Detour Path Angular Information Based Range-Free Localization in Wireless Sensor Network. Journal of Sensor and Actuator Networks. 2013;2(1):25–45. Available from: http://www.mdpi.com/2224-2708/2/1/25.
  22. Hu W, Tan T, Wang L, Maybank S. A survey on visual surveillance of object motion and behaviors. Systems, Man, and Cybernetics, Part C: Applications and Reviews, IEEE Transactions on. 2004 Aug;34(3):334–352.
  23. Wang L, Huang K, Huang Y, Tan T. Object detection and tracking for night surveillance based on salient contrast analysis. In: Image Processing (ICIP), 2009 16th IEEE International Conference on; 2009. p. 1113–1116.
  24. Lee JK, Kim JJ, Jun MS. Design and Realization for Security System of Facility Management Based USN. In: Ubiquitous Computing and Multimedia Applications (UCMA), 2011 International Conference on; 2011. p. 30–34.
  25. Vittal KP, Ajay Pai P, Ajay Shenoy B, Rao CHS. Computer Controlled Intrusion-Detector and Automatic Firing-Unit for Border Security. In: Computer and Network Technology (ICCNT), 2010 Second International Conference on; 2010. p. 289–293.
  26. Kimura N, Latifi S. A survey on data compression in wireless sensor networks. In: Information Technology: Coding and Computing, 2005. ITCC 2005. International Conference on. vol. 2; 2005. p. 8–13.
  27. De D, Gupta MD, Sen A. Energy Efficient Target Tracking Mechanism using Rotational Camera Sensor in WMSN. Procedia Technology. 2012;6(0):674–681. 2nd International Conference on Communication, Computing & Security [ICCCS-2012]. Available from: http://www.sciencedirect.com/science/article/pii/S2212017312006263.
  28. Oztarak H, Akkaya K, Yazici A. Lightweight Object Localization with a Single Camera in Wireless Multimedia Sensor Networks. In: Global Telecommunications Conference, 2009. GLOBECOM 2009. IEEE; 2009. p. 1–6.
  29. Medeiros H, Park J, Kak AC. Distributed Object Tracking Using a Cluster-Based Kalman Filter in Wireless Camera Networks. Selected Topics in Signal Processing, IEEE Journal of. 2008 Aug;2(4):448–463.
  30. Boulanouar I, Rachedi A, Lohier S, Roussel G. Energy-aware object tracking algorithm using heterogeneous wireless sensor networks. In: Wireless Days (WD), 2011 IFIP; 2011. p. 1–6.