Task 1.1 – Project Management and Coordination. (Start:M1, End:M48)

The technical and administrative management of the project, including all activities related to: 1) organization of internal tasks/events; 2) monitoring of tasks and milestones; and 3) management of resources and funding. The management team will exploit state-of-the-art computer tools for project management. Internal management reports will be written every 3 months to keep track of project accomplishments, resources, deadlines, etc. The MB will serve as the interface with the EC for all issues concerning the project; it will supervise the partners on issues regarding payments and the timely delivery of documents, and will represent the project at external events. The MB will be responsible for quality control and overall assessment.


Task 2.1 – Definition of contemporary security and privacy requirements for the social web, survey of security and privacy enhancing web-based tools, and survey of the research state of the art. (Start:M1, End:M4):

Extensive taxonomy of security and privacy issues in OSNs. Special focus on threats that render minors and other vulnerable population groups susceptible to abusive behaviors. Compilation of state-of-the-art methods for mitigating such threats and short-list of the most appropriate.

Task 2.2 – Measurements and test data preparation. (Start:M5, End:M8):

Requirements will be extracted through our own measurement and analysis of real OSN data. The analysis will: a) validate the results of the literature survey, e.g., by quantifying the severity and frequency of occurrence of different security and privacy problems; and b) prepare test input for the development of security and privacy enhancing tools and for the testing and piloting activities of subsequent WPs.

Task 2.3 – High-level architectural design. (Start:M9, End:M12):

Fleshing out a high-level architectural design of the entire solution, including a breakdown into different functional components connected by clearly identified interfaces. The architecture will define the interface for importing data from different OSNs. Definition of browser add-on architectures for identifying and protecting sensitive OSN content, for identifying fake activity, and for detecting abusive behaviors. Modular and incremental design for integrating security and privacy enhancing solutions that require only the installation of the appropriate add-ons and the existence of a back-end infrastructure, without any modifications by OSN providers.


T3.1 User and societal aspects of Security and Privacy of sensitive OSN content (Start:M7, End:M18):

Understanding users’ expectations, needs, and concerns with regard to the usability, security, and privacy of sensitive OSN content requires investigating various issues from a socio-psychological perspective:
How do existing users of social networks perceive trust, security, and privacy? What are users’ concerns, if any, regarding trust, security, and privacy? What are the social and cultural influences on user perceptions regarding trust, security, and privacy? How easy would it be for users to accept the implicit protection of sensitive content through biometric characteristics such as face and fingerprints?

T3.2 Usability of existing Security and Privacy OSN systems (Start:M13, End:M24):

Study of ways to improve user experience and user behaviour when faced with security and privacy risks. How should we design browser plugins that serve this purpose and are usable? To address these issues, the usability and user experience of existing security and privacy systems for OSNs (e.g., FB privacy settings or fake-account reporting) will be evaluated using query-based techniques (questionnaires, interviews, focus groups) and through usability studies (observations, eye-tracking studies).

T3.3 Sentiment Analysis of OSN content (Start:M19, End:M30):

The user-studies data from T3.2 will be complemented with content and sentiment analysis of OSN content. Understanding how people feel about the things they are talking about requires the development of accurate and sophisticated sentiment analysis approaches. Existing challenges, such as the recognition of sarcastic or ironic content, must be addressed to capture people’s sentiments successfully. Finally, the adoption of such approaches can contribute to solving important open issues (e.g., the timely recognition of people with suicidal tendencies or criminal behaviour) and can significantly contribute to the identification and management of critical situations.
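The core of such an approach can be illustrated with a minimal lexicon-based sentiment scorer with naive negation handling; the lexicon, function name, and negation rule below are illustrative assumptions, not the project's actual method:

```python
# Illustrative lexicon-based sentiment scorer (lexicon and scoring rule
# are toy assumptions; real systems would also handle sarcasm and irony).
NEGATORS = {"not", "no", "never"}
LEXICON = {"love": 2, "great": 2, "happy": 1, "sad": -1, "hate": -2, "awful": -2}

def sentiment_score(text):
    """Sum lexicon polarities over tokens, flipping the sign after a negator."""
    score, negate = 0, False
    for token in text.lower().split():
        token = token.strip(".,!?")
        if token in NEGATORS:
            negate = True
            continue
        polarity = LEXICON.get(token, 0)
        score += -polarity if negate else polarity
        if polarity != 0:
            negate = False  # negation scope ends at the next polar word
    return score
```

A scorer of this shape fails on sarcasm ("oh, great, another exam"), which is precisely the challenge the task highlights.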

T3.4 Guidelines for usable user interfaces for content detection and protection, fake activity suppression, and malicious behaviour avoidance (Start: M31, End: M48):

Building on the findings of T3.1, T3.2 and T3.3 and on the experience acquired in the piloting phase, we will propose OSN design guidelines that: (a) discourage users from accepting friend requests from malicious accounts, encourage them to flag spam and malicious accounts when appropriate, and prevent them from exhibiting unscrupulous behaviours themselves; and (b) detect, warn against, and provide ways of avoiding sensitive OSN content. These design guidelines will be supported both by theoretical findings (past published work) and by empirical findings as collected and analysed in T3.1, T3.2 and T3.3.


T4.1 User profiling to detect and prevent malicious and criminal activities. (Start:M7, End:M18):

Detection of problematic and aggressive user behaviour. Understanding how users are exposed to predators and cyberbullying. Using features extracted from user profiles, NLP of message content, social network structure, and other relevant information, we will build classifiers that, for example, can notify parents if their children are being cyberbullied or preyed upon. This task will exploit data obtained from the corresponding browser add-on (T4.4) and from the Telefonica-owned Tuenti OSN.
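The classifier pipeline described above can be sketched as follows; every feature, weight, and threshold here is an illustrative assumption (a real model would be trained on the Tuenti and add-on data rather than hand-weighted):

```python
# Minimal sketch of a feature-based cyberbullying risk classifier.
# Feature names, the term list, weights, and the notification threshold
# are all toy placeholders, not the project's actual model.
ABUSIVE_TERMS = {"loser", "ugly", "stupid", "hate"}

def extract_features(messages, sender_age, recipient_age):
    """Toy features: abusive-word ratio, message burst size, age gap."""
    tokens = [t for m in messages for t in m.lower().split()]
    abusive = sum(1 for t in tokens if t.strip(".,!?") in ABUSIVE_TERMS)
    return {
        "abusive_ratio": abusive / max(len(tokens), 1),
        "burst": len(messages),
        "age_gap": abs(sender_age - recipient_age),
    }

def risk_score(features):
    """Linear combination of features; weights are placeholders."""
    return (5.0 * features["abusive_ratio"]
            + 0.1 * features["burst"]
            + 0.05 * features["age_gap"])

def should_notify_parent(features, threshold=1.0):
    return risk_score(features) > threshold
```

In practice the linear scorer would be replaced by a trained classifier, but the feature-extraction/decision split mirrors the task's design.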

T4.2 Sentiment and affective analysis on individual and collective basis. (Start:M13, End:M24):

Understanding user behavior primarily by analyzing online textual resources. This consists of: a) sentiment analysis (or opinion mining): the computational study of sentiments/opinions expressed in text; and b) affective analysis: tracking of people’s specific emotional states (e.g., anger, disgust, joy), allowing a more in-depth understanding of their psychological and emotional state. Feature extraction techniques will be devised to consider the OSN entities’ interaction patterns (such as the average inter-arrival time of interactions, its variance, etc.). We will propose advanced techniques to analyse and compare individual versus collective behaviours of OSN user groups so as to detect suspicious user behaviours early; the results will then be combined to determine “suspiciousness” scores and ranks.
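The interaction-pattern features and the individual-vs-collective comparison can be sketched as follows (feature names and the deviation-based score are illustrative assumptions):

```python
from statistics import mean, pvariance

def interaction_features(timestamps):
    """Inter-arrival statistics for one OSN entity.
    timestamps: ascending event times in seconds; needs at least 2 events."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return {"mean_gap": mean(gaps), "var_gap": pvariance(gaps)}

def suspiciousness(entity_feats, group_feats):
    """Toy individual-vs-collective score: relative deviation of an
    entity's mean inter-arrival time from its group's mean."""
    return (abs(entity_feats["mean_gap"] - group_feats["mean_gap"])
            / group_feats["mean_gap"])
```

An entity posting far faster than its peer group (e.g., a bot or an obsessive harasser) would receive a high score under this sketch.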

T4.3 OSN malicious users time-dependent detection. (Start:M19, End:M36):

Modelling of time-dependent interactions and activities of social network users, and the application of graph mining and text processing methodologies to detect latent patterns of activity by users/entities. The aim of this task is to discover multiple patterns indicative of predatory (sexual predators or cyberbullying) behaviour over time by analysing OSN user interactions. Advanced data mining and analytics techniques will be proposed in order to leverage the OSN users’ concurrent activities that indicate behavioural variations and spikes with emphasis on advancing the state of the art on anomaly detection in OSNs.
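A minimal baseline for the spike detection mentioned above is a trailing-window z-score test; the window size and threshold are illustrative assumptions, and the task's actual techniques would go well beyond this:

```python
from statistics import mean, pstdev

def activity_spikes(counts, window=7, z_threshold=3.0):
    """Flag time bins whose activity count deviates from the trailing
    window by more than z_threshold standard deviations (toy anomaly
    detector; window and threshold are placeholders)."""
    spikes = []
    for i in range(window, len(counts)):
        history = counts[i - window:i]
        mu, sigma = mean(history), pstdev(history)
        if sigma == 0:
            sigma = 1.0  # avoid division by zero on flat history
        if (counts[i] - mu) / sigma > z_threshold:
            spikes.append(i)
    return spikes
```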

T4.4  Design and implementation of browser add-on that detects distressed users and aggressive behavior. (Start:M25, End:M36):

Development of a browser add-on that extracts information from the user’s newsfeed and messages, and associates the extracted info with data being analysed in the back-end OSN data analysis stack. The functionalities of the add-on will be to: a) perform sentiment and affective analysis in the front-end; b) associate this analysis with back-end derived results; and c) inform users (or their parents) when they are experiencing, or are about to experience, distressing criminal behaviour.


T5.1 Detection of false information propagation. (Start:M13, End:M24):

Study of the various ways in which real and false information propagates, and development of techniques able to detect false information before it spreads. We will focus on the use cases of false information that relate to cyberbullying (e.g., false rumors or doctored images about teenagers).

T5.2 Detection of fake identity and reputation in online social networks. (Start:M19, End:M30):

Development of techniques to automatically detect social network accounts that build fake identity and reputation on the network.  We will focus on identity and reputation misrepresentation by child predators.

T5.3 Understanding and countering fraudulent audience boosting in social networks. (Start:M19, End:M30):

Systematic understanding of malevolent actors behind so-called Facebook like farms and Twitter follower markets, their abilities, resources, and strategies, in order to design effective countermeasures. We will focus on audience boosting used by cyberbullies and sexual predators.

T5.4 Design and implementation of browser plugin that warns users about false information dissemination and identity misrepresentation. (Start:M25, End:M36):

Development of a browser add-on that scans the social network information of a user and associates the extracted info with data being analysed in the back end. The functionalities of the add-on will be the following: a) identify false information that the user may receive; b) detect false information being spread over the network that concerns the user; c) warn the user when they communicate with persons who lie about their identity; d) warn the user when they communicate with persons with a bad reputation due to cyberbullying or predation.


T6.1 Automatic detection of sensitive personal content. (Start: M19, End: M30):

Development of content analysis algorithms to automatically detect content, including images and videos, that may contain private user data that could eventually pose a risk to the integrity and safety of the user. The main actions taking place during this task are: a) generation of annotated training/test sets that can be used for training/testing the algorithms to be developed; b) determination of key image processing tasks in relation to the detection of private content (e.g., face recognition, expression recognition, body pose recognition, and nudity detection); c) development of algorithms for each of the aforementioned image interpretation tasks; d) computational linguistics and NLP algorithms to analyse the content of the communication between users (messages or newsfeed posts) for evidence of sensitive content (e.g., address information). The developed algorithms will be integrated in a Content Analysis Filter (CAF) installed on users’ browsers, so that an online user is warned when content that could potentially contain private data is being uploaded from their browser to an OSN or appears on other users’ newsfeeds.
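The textual side of the CAF (action d) can be illustrated with a pattern-based detector for private-data cues; the two patterns below are simplistic, English-centric assumptions, and a real filter would need locale-aware patterns plus the image-analysis components:

```python
import re

# Illustrative patterns for textual private-data cues (phone numbers and
# street addresses). These are toy assumptions, not the project's filter.
PATTERNS = {
    "phone": re.compile(r"\+?\d[\d\s-]{7,}\d\b"),
    "address": re.compile(r"\b\d{1,4}\s+\w+\s+(Street|St|Avenue|Ave|Road|Rd)\b",
                          re.IGNORECASE),
}

def sensitive_cues(text):
    """Return the categories of private data detected in a message."""
    return {name for name, rx in PATTERNS.items() if rx.search(text)}
```

The add-on would warn before upload whenever `sensitive_cues` returns a non-empty set.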

T6.2 Steganography and Digital Watermarking. (Start: M21, End: M32):

Development of steganography-related techniques that enable users to specify groups of people that can view certain sensitive content. Development of digital watermarking techniques on the sensitive content so that in the event of unauthorized leakage, the culpable parties can be identified and sanctioned.
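The classic primitive behind both directions is least-significant-bit (LSB) embedding, sketched below on raw bytes (e.g., pixel values); this toy omits the robustness to compression and cropping that a real watermarking scheme must provide:

```python
# Minimal LSB embedding sketch: a payload (e.g., a watermark identifying
# the authorized recipient) is hidden in the low bit of each cover byte.
def embed(cover: bytes, payload_bits: list) -> bytes:
    assert len(payload_bits) <= len(cover), "cover too small for payload"
    out = bytearray(cover)
    for i, bit in enumerate(payload_bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite the low bit
    return bytes(out)

def extract(stego: bytes, n_bits: int) -> list:
    """Recover the payload from the low bits of the first n_bits bytes."""
    return [b & 1 for b in stego[:n_bits]]
```

Embedding a distinct per-recipient watermark is what would let the culpable party be identified after an unauthorized leak.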

T6.3 Group encryption and attribute-based encryption. (Start: M21, End: M32):

Development of cryptographic techniques that allow users to specify groups of people that can view certain sensitive content, using homomorphic, group, and attribute-based encryption.
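To illustrate only the access-policy side of attribute-based protection, the toy sketch below gates key derivation on an attribute policy; this is emphatically NOT attribute-based encryption (which binds the policy cryptographically to the ciphertext itself), and the policy format and key derivation are invented for illustration:

```python
import hashlib

# Toy policy evaluation gating key release. NOT real ABE: it only shows
# the "attributes must satisfy the policy" step that ABE enforces
# cryptographically. Policy format and derivation are assumptions.
def satisfies(policy, attributes):
    """policy: ("and"|"or", attr, attr, ...) over plain attribute strings."""
    op, *terms = policy
    hits = [t in attributes for t in terms]
    return all(hits) if op == "and" else any(hits)

def content_key(policy, attributes, master_secret: bytes):
    """Derive a content key only for users whose attributes satisfy the policy."""
    if not satisfies(policy, attributes):
        return None
    return hashlib.sha256(master_secret + repr(policy).encode()).hexdigest()
```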

T6.4 Design and implementation of browser add-on that informs users about sensitive content and enables them to protect it. (Start:M25, End:M36):

Development of a browser add-on that detects sensitive content and enables the user to protect it in a user-friendly way. The functionalities of the add-on will be to: a) identify user images, messages, or newsfeed posts that may be inappropriate for uploading to an OSN without the proper access restrictions; b) warn the user or their parents about the risks of uploading that sensitive content; c) provide usable controls that enable users or their parents to protect such content, either by simply defining restrictive OSN privacy settings or by using steganography and encryption.


T7.1 Initial testing of OSN data analysis algorithms using real data (Start: M37, End:M44):

The implemented algorithms from WP4 and WP5 will be fed with real but anonymized OSN data, and their output will be logged and analyzed. To capture extreme cases that fall outside the observed real OSN activity, we will also rely on synthetically generated input and simulation.
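One way to produce such extreme-case input is to draw per-user activity volumes from a heavy-tailed distribution, so that rare high-activity outliers absent from the captured traces are still exercised; the distribution, parameters, and time horizon below are illustrative assumptions:

```python
import random

# Sketch of a synthetic extreme-case generator: per-user event streams
# whose volume follows a Pareto (heavy-tailed) distribution, yielding a
# few extreme users. Shape parameter and 24h horizon are placeholders.
def synthetic_activity(n_users, seed=0):
    rng = random.Random(seed)  # seeded for reproducible test input
    streams = {}
    for user in range(n_users):
        volume = int(rng.paretovariate(1.5))  # heavy tail -> extreme users
        streams[user] = sorted(rng.uniform(0, 86400) for _ in range(volume))
    return streams
```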

T7.2  Integration of the software subsystems. (Start: M37, End:M44):

The core software modules from the previous WPs will be integrated into a single browser-based S/W suite in the form of three browser add-ons and a large-scale OSN data analysis software stack. The add-ons will support FB and Twitter. The user will be offered either an aggregated dashboard for all connected OSNs, or independent dashboards per OSN.

T7.3 Piloting activities. (Start:M41, End:M48):

Based on the results of the previous task, end-users (from industry, academia, or the general public) will be invited to try out the tools and report their findings. Specialized questionnaires will be drafted, and end-users will be asked to respond before, during, and at the end of the evaluation process. The piloting activities will contain both test scenarios for the developed tools and analytical results of the responses. The demonstration will take place in two stages. In the first stage, the tools will be accessible only to a restricted number of users. In the second stage, the ENCASE tools will be released in the wild.


T8.1 Dissemination. (Start:M1, End: M48):

Handling of all the activities related to the scientific dissemination of the results of the project. The project results will be presented to the scientific community through publications in journals, conferences, and workshops, as well as through demos and participation in industrial/commercial exhibitions and congresses. All published material will also appear in the project’s website that will be set up for dissemination purposes. Three workshops will be organized during the project. Joint meetings with other PEOPLE projects will be planned.  CUT, UCL and AUTH will disseminate the scientific results of the project in high-profile scientific venues like ACM CHI, ACM SIGCOMM, USENIX NSDI, IEEE S&P, ACM CCS and ACM Transactions on Information Security. CUT will organize a workshop on usable privacy and security in the social web.  UCL will organize the second workshop of the project on fake/false social web activity.  ROMA 3 will organize the final workshop and an industrial session on user-side deployable tools for content detection and protection in OSNs.

T8.2 Standardization and IP Protection. (Start:M1, End:M48):

Ensuring the visibility of the project and its results in child protection organizations and to technical specification groups in the areas of security, privacy, and big data management. The activities of related groups will be closely monitored and members will participate in relevant meetings. Dealing with the protection of the intellectual property produced during the project.

T8.3 Feasibility Studies, Business Model, Market Opportunities, and Exploitation Plan. (Start:M13, End:M48):

Analysis of the results of the project from a business perspective, and identification of business cases for the exploitation of the results. A detailed exploitation plan for each industrial partner will be presented. INNO will lead the exploitation efforts by participating in industrial trade shows and placing paid editorials in industry magazines. TID and CYR will create marketing material (white papers, product brochures, promotional activities, webinars, etc.) to assist their sales forces in implementing aggressive sales plans.

Copyright © 2016 ENCASE