This project is part of the “Plateformes Logicielles pour l’Informatique Mobile” (Software Platforms for Mobile Computing) course in our last year of Computer Science Engineering studies. It consists of developing a Windows Phone 8 mobile app that leverages the various sensors available on a smartphone, and uses Machine Learning techniques to categorize the measured data and detect different possible states. In our case, the measured information is the geographic location of the device.
Montana, Perron
The application contains one main view that serves as a demonstration of the clustering algorithm we used. The sample data for the clustering is entered manually by telling the device to record its current GPS coordinates (read from its Assisted GPS sensor) into a local database; we use the Windows.Devices.Geolocation namespace to access the current position of the device. To compute the coordinate clusters we used the AlgLib C# library, which implements the well-known K-Means cluster analysis algorithm: it partitions n vectors (in our case, 2-dimensional GPS coordinate vectors) into k < n clusters. The user can set the number of clusters to look for and run the algorithm, which computes the different cluster centers (themselves GPS coordinates). Currently, the results are stored in a local variable and then displayed on-screen. Once the clusters have been computed, the user can ask his phone to locate him and tell him which cluster he is currently in: we compute the distance from the current location to each cluster center and select the nearest one, the same criterion K-Means uses to partition the position vectors.
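As a rough illustration, here is how a position sample can be captured with the Geolocator class from Windows.Devices.Geolocation on Windows Phone 8. This is only a sketch; the persistence call at the end is a hypothetical placeholder for our local-database code, not an actual API.

```csharp
using System;
using System.Threading.Tasks;
using Windows.Devices.Geolocation;

public static class PositionRecorder
{
    // Reads the current position from the device's (Assisted) GPS sensor.
    public static async Task RecordCurrentPositionAsync()
    {
        var geolocator = new Geolocator { DesiredAccuracy = PositionAccuracy.High };

        // Accept a cached fix up to one minute old; give up after ten seconds.
        Geoposition position = await geolocator.GetGeopositionAsync(
            TimeSpan.FromMinutes(1), TimeSpan.FromSeconds(10));

        double latitude = position.Coordinate.Latitude;
        double longitude = position.Coordinate.Longitude;

        // SaveCoordinate is a hypothetical stand-in for the local-database write.
        // SaveCoordinate(latitude, longitude);
    }
}
```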
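The clustering step itself reduces to a few ALGLIB calls. A minimal sketch, assuming the recorded samples have been loaded from the database into an n×2 matrix with one (latitude, longitude) row per position; this uses ALGLIB's clusterizer subpackage, and the exact calls may differ between ALGLIB versions.

```csharp
// K-Means clustering of GPS samples with the ALGLIB C# library.
public static double[,] ComputeClusterCenters(double[,] points, int k)
{
    alglib.clusterizerstate state;
    alglib.kmeansreport report;

    alglib.clusterizercreate(out state);
    alglib.clusterizersetpoints(state, points, 2); // 2 selects Euclidean distance
    alglib.clusterizerrunkmeans(state, k, out report);

    // report.c is a k×2 matrix holding one cluster center per row;
    // report.cidx[i] is the cluster index assigned to the i-th sample.
    return report.c;
}
```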
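Assigning the current position to a cluster is then a simple nearest-center lookup, sketched below.

```csharp
// Returns the index of the cluster whose center is closest to the given
// position; `centers` is the k×2 matrix produced by the clustering step.
public static int FindNearestCluster(double latitude, double longitude, double[,] centers)
{
    int nearest = 0;
    double best = double.MaxValue;
    for (int i = 0; i < centers.GetLength(0); i++)
    {
        double dLat = latitude - centers[i, 0];
        double dLon = longitude - centers[i, 1];
        double distSq = dLat * dLat + dLon * dLon; // squared distance suffices for comparison
        if (distSq < best)
        {
            best = distSq;
            nearest = i;
        }
    }
    return nearest;
}
```

Note that treating raw latitude/longitude pairs as Euclidean coordinates is a simplification: it is reasonable for areas a few kilometres across, but a degree of longitude shrinks with latitude, so a larger-scale deployment would want a proper geodesic distance.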
The obtained information is displayed on the main screen: the interface shows the computed cluster centers, along with the last coordinates recorded to the sample data and the number of coordinates saved in the local database. When the user asks which cluster he is currently located in, an alert dialog indicates the index of that cluster.
Our idea for this project came from our interest in Human-Machine Interactions, and particularly the problem of HMI adaptation based on the usage context (the variables describing the User, the Device and the Environment). Since a user doesn’t use his phone the same way at work and at home, for example, his phone should be “intelligent” enough to detect context information (here, the environment) and dynamically adapt its features. We thought the easiest (yet still effective) way to detect this information would be to use the user’s location and define geographic areas: changing area would trigger the UI adaptation. To recognize the different areas, we would use sample data provided by the user and a machine learning algorithm to determine the clusters.
We didn’t have enough time to implement the adaptation itself, but we were able to implement the cluster recognition from the data measured with the GPS sensor.
If we were to add the adaptation part, the current state of the project would represent the machine learning part. Once the clusters are detected, the user could assign each one a label as well as an action, such as launching a web page, and later rename a cluster or add actions to it whenever he wants. It would also be worthwhile to improve the cluster detection method by using the diameter of each cluster to determine whether the user really is inside a cluster, or is outside all of them. In the latter case, the system could ask him whether he wants to assign his position to an existing cluster or create a new one; a possible membership test is sketched below.
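A minimal sketch of such a test, assuming we keep for each cluster its center and a radius estimated from the sample data (for instance the largest center-to-member distance). Both values are our own additions here; neither is tracked by the current implementation.

```csharp
using System;

public static class ClusterMembership
{
    // Hypothetical membership test: the user is considered inside the cluster
    // only if his position lies within `radius` of the cluster's `center`
    // (a (latitude, longitude) pair, as elsewhere in the app).
    public static bool IsInsideCluster(double latitude, double longitude,
                                       double[] center, double radius)
    {
        double dLat = latitude - center[0];
        double dLon = longitude - center[1];
        return Math.Sqrt(dLat * dLat + dLon * dLon) <= radius;
    }
}
```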
In this project, we were able to: