
Selected Speakers

Alexander Duisberg & Christian Winkler

Bird & Bird LLP & mgm technology partners GmbH

Alexander Duisberg is a partner of Bird & Bird in Munich, handling commercial, digital transformation and data protection issues. He is a member of our International Tech & Comms Sector Steering Group and leads our Software & Services Industry initiative.

Alexander is considered a leading expert on a wide range of contentious and non-contentious matters of technology law, including agile development, Big Data, cloud migration, data protection and security, digital transformation projects, Internet of Things, Industry 4.0, open source, outsourcing, software and services, and complex technology projects. His client base comprises suppliers and customers in a range of industries, in particular within the automotive, mechanical engineering, energy and insurance sectors. Alexander is an appointed panelist to the WIPO Arbitration and Mediation Centre, and has deep experience in domestic and international arbitrations on technology disputes.
He is a recognized thought leader and government advisor to the German “Trusted Cloud” project as well as the “Smart Data” project. He is a frequent speaker at national and international events and contributes regularly to publications on tech-related topics. Alexander speaks German, English, Italian, Spanish and French.
Christian Winkler founded his first software company while finishing his Ph.D. and has worked with Internet technologies for 20 years. Recently he has focused on applications involving large amounts of data or many users. As big data applications become more and more popular, there are many interesting projects: data must not only be persisted, but many aggregates have to be calculated and specific data must be found.
He therefore concentrates on intelligent algorithms such as machine learning, which open up a whole new view for interpretation. Since this often requires sophisticated visualization, he is also interested in frontend technologies.

Automated driving – data security, data privacy and liability questions in real life use cases

No other topic entails such a variety of questions that are complex in practice, both technically and legally. The question of data ownership alone (“Who ‘owns’ the data?”) is legally unsettled and can only be solved through robust contractual agreements. The entire ecosystem of a data-driven economy depends on this, in order to enable market participants with very different objectives to participate and put their business models into effect.
The creation of user profiles is closely connected with this (not always, but mostly) and hence linked to the issue of data protection. The “Basic principles of data protection for connected vehicles” developed by the VDA, including a definition of data categories and graded levels of data protection relevance, illustrate in an impressive manner how, inter alia, the requirements of “privacy by design” can be put into practice. Cloud-based services and related international data transfers increase the need for developing legally reliable solutions.
Furthermore, the success and the introduction of autonomous driving depend crucially on a sound and reliable framework addressing liability. Next to the regulatory framework governing the driver’s and vehicle owner’s liability, the questions and solutions based in contract and product liability law are of fundamental concern.
In the real world, several applications are already in a testing phase, even in field tests. Some of them are shown in more detail below:

  1. Detecting traffic signs
  • Speed limits
  • Time-dependent traffic signs
  • Speeding in restricted areas

  2. Dangerous driving conditions
  • Areas with frequent traffic jams
  • High acceleration or heavy braking
  • Critical values from vehicle sensors (lateral accelerations)

  3. Detection of driving “sessions”
  • Possible and technically feasible
  • Problematic, as this can be related to individuals
  • Technically interesting for evaluating certain parameters (average speed, curve detection, traffic jams, stop times etc.)
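As a purely illustrative sketch (not the presenters' implementation, and with simplified assumptions such as a fixed time-gap threshold for session boundaries), detecting sessions and computing per-session parameters of the kind listed above could look like this:

```python
from dataclasses import dataclass

@dataclass
class Sample:
    t: float      # timestamp in seconds
    speed: float  # vehicle speed in km/h

def split_sessions(samples, max_gap=300.0):
    """Group timestamped samples into driving sessions, starting a new
    session whenever the gap between samples exceeds max_gap seconds."""
    sessions, current = [], []
    for s in samples:
        if current and s.t - current[-1].t > max_gap:
            sessions.append(current)
            current = []
        current.append(s)
    if current:
        sessions.append(current)
    return sessions

def summarize(session):
    """Per-session parameters of the kind mentioned above."""
    speeds = [s.speed for s in session]
    return {
        "avg_speed": sum(speeds) / len(speeds),
        # time spent (nearly) standing still, e.g. in traffic jams
        "stop_time": sum(
            session[i + 1].t - session[i].t
            for i in range(len(session) - 1)
            if session[i].speed < 1.0
        ),
    }
```

Note that exactly such per-session summaries are what makes this data relatable to individuals, which is why the abstract flags session detection as problematic from a data protection perspective.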

The joint presentation of the international law firm Bird & Bird together with the technology consulting company mgm technology partners discusses and demonstrates real-life use cases combined with possible (technical and legal) solutions.

Anat Elhalal

Data Tech Lead at the Digital Catapult

Catalysing Sector Advantage by Collaborative Data Exploitation

Data value is created when it is exploited to generate actionable insight that drives business value. Most organisations succeed in exploiting their own data to create actionable descriptive insight and sometimes predictive and prescriptive insight. Very few extend this model to collaboratively share their proprietary data with that of another organisation to leverage the combined insight collectively. The Data Catalyser is built on the premise that enabling more organisations to collaboratively exploit their data will create unforeseen opportunities for all parties involved. Organisations contributing data could uncover innovations. Those analysing the data would find new business opportunities.

Arild Waaler

University of Oslo

Arild Waaler is professor of logic and computation at the University of Oslo. He coordinates the FP7 IP project Optique and is director of the Centre for Scalable Data Access, newly funded by the Research Council of Norway.

Taking Ontology-Based Data Access out of the Laboratory: experiences from a use case deployment at Statoil

The Optique project is a four-year IP that aims to take Ontology-Based Data Access "out of the laboratory" and deploy it on large-scale use cases from European industry. This presentation summarises the experiences from the first three years of deployment at the oil & gas company Statoil.

Carl Per Magnus Wicén


Carl Per Magnus Wicén is an experienced officer in Command, Control and Information Technology. He has served as Regional Manager for WM-data (a Swedish IT company), Section Manager at the Swedish Defence Materiel Administration (FMV), and Marketing Director at Rote Consulting AB, Strategos Consulting AB and Termiska Process System AB (a manufacturer of heat power plants). He is the current owner of Subitus Consulting AB and Senior Consultant for Offentlig-Privat Samverkan (Public-Private Cooperation Advisory) and Offentlig upphandling (Public Procurement Advisory). He is politically active.

The Information Society CityLab by KISEA

KIRUNA Information Society Exhibition Area (KISEA) will facilitate and organize the world’s first and largest CITYLAB for real-life research, development, implementation, testing and demonstration of Information Society capabilities. This new city will be an exciting exhibition and research area, where new products and services will meet their customers for the first time.
Kiruna CITYLAB will become an incubator for the knowledge required to create the future information society in the service of citizens. Kiruna CITYLAB invites national and international experts, companies and researchers to showcase their innovations.

Frank Bensberg and Gandalf Buscher

Osnabrück University of Applied Sciences, Hochschule für Telekommunikation Leipzig

Prof. Dr. Frank Bensberg
Frank Bensberg studied business administration with a focus on business informatics and marketing. From 1994 to 2000 he worked as a research assistant and received a doctorate on the topic of web log mining in internet-based markets from the University of Münster, Germany. Subsequently, he was habilitated in 2009 and worked as professor for business information systems at the Hochschule für Telekommunikation Leipzig (HfTL).
Since April 2015, Frank Bensberg has been professor for business informatics at the Osnabrück University of Applied Sciences.
B. Sc. Gandalf Buscher
Gandalf Buscher received a bachelor's degree in business informatics from the Hochschule für Telekommunikation Leipzig (HfTL) in dual cooperation with Deutsche Telekom AG, in the field of corporate ERP service management. Since September 2012 he has taken part in a part-time master's programme in business informatics at the HfTL, specializing in data analytics and big data. Currently, he is working on his master's thesis in the domain of job mining and is employed as a system integrator for product development at Deutsche Telekom AG.

The Impact of Big Data on the Job Market – Results from a Job Mining Study

Job adverts in online portals provide an open data source for identifying skill requirements in the employment market. We designed a job mining solution which uses text analysis techniques to gain detailed insight into current skill requirements for big data jobs. Based on the analysis of 80,014 job adverts collected between June 2014 and April 2015, we show which kinds of big data jobs have evolved and which skill sets are typical of them. This information is valuable for educational decision makers developing training services that fit the needs of the market.
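The study's actual pipeline is not reproduced here, but its core text-analysis step, counting how many adverts mention each skill term, can be sketched as follows. The skill vocabulary and adverts are illustrative placeholders, not the taxonomy used in the study:

```python
from collections import Counter
import re

# Illustrative skill vocabulary; the actual study works with a
# far larger set of skill terms derived from the advert corpus.
SKILLS = {"hadoop", "spark", "java", "python", "nosql", "machine learning"}

def extract_skills(advert: str) -> set:
    """Return the set of known skill terms mentioned in one advert."""
    text = advert.lower()
    return {skill for skill in SKILLS
            if re.search(r"\b" + re.escape(skill) + r"\b", text)}

def skill_frequencies(adverts) -> Counter:
    """Count, over all adverts, how many adverts mention each skill."""
    counts = Counter()
    for ad in adverts:
        counts.update(extract_skills(ad))
    return counts
```

Aggregating such per-skill frequencies over tens of thousands of adverts is what allows skill sets to be compared across job types and over time.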

Ivar Østhassel, Ling Shi, Nikolay Nikolov, Arne-Jørgen Berre, Dumitru Roman

Statsbygg, SINTEF

Ling Shi works as a senior IT advisor at Statsbygg, Norway. She holds an MSc in Computer Science (2005) from the University of Oslo and an M.Phil. in System Dynamics (1999) from the University of Bergen. Her interests include integration, integration rules and semantic technology. Currently she is also a part-time Ph.D. student at the University of Oslo.
Nikolay Nikolov holds a master of science degree and works at SINTEF (Norway). He received a joint Erasmus Mundus M.Sc. degree in Service Engineering (2014) from Stuttgart University, the University of Crete and Tilburg University. He has been doing research and software development related to big data, the semantic web and model-driven design.
Dr. Arne-Jørgen Berre is chief research scientist at SINTEF (Norway), working in the area of interoperable systems. He has served as technical manager and project manager for a number of IST projects related to semantically interoperable systems and environmental and geospatial services. He is associate professor II at the Informatics department at the University of Oslo, Norway.
Dr. Dumitru Roman works as a Senior Research Scientist at SINTEF (Norway). He is active in the Linked Open Data field, where he currently acts as the project coordinator of the DaPaaS and proDataMarket projects, both dealing with innovative products and services using Open Data. He holds an adjunct associate professorship at the University of Oslo, Norway.

State of Estate: An Open Data-based Reporting Service for the State-owned Properties in Norway

Statsbygg is the public company responsible for reporting the state-owned property data in Norway. Such reporting is currently a resource-demanding and error-prone process. The State of Estate (SoE) service aims to build a new reporting service by sharing, integrating and utilizing (open and closed) cross-sectorial property data, and to increase the transparency and accessibility of data from the public sector for further innovation. The proposed presentation will give an overview of the status of the SoE reporting service – the first of its kind in Norway, with the possibility of replication in other countries – with a specific focus on its key innovations and the technical and organizational challenges and solutions concerning the use of open data in the SoE service.

Kamel Gadouche, Franck Cotton, Alexandre Marty, Nawres Guedria


Kamel Gadouche
Kamel is the director of the French research data centre CASD, hosted by the GENES institution (French national statistical authority: Group of National Schools for Economics and Statistics, GENES). He is in charge of promoting the secure use by researchers of very detailed data owned mainly by public administrations (statistical institutes, the ministry of finance, health data...). Before that, he worked as an IT manager at GENES and as a project manager at INSEE, the French National Statistical Institute.
Franck Cotton
Franck is technology evangelist at INSEE, the French National Statistical Institute, where he started as a business statistician, then became project manager for the development of a classifications management system, before taking responsibility for IT infrastructure and IT security. He is particularly active in the fields of metadata standards, linked data and international cooperation.
Alexandre Marty
Alexandre is the Data Science manager at CASD, the French Secure Remote Access Center. With an engineering background, he formerly worked in IT start-ups in Canada. He is now in charge of the Big Data platform of CASD and is responsible for both technical and general aspects of the platform. He is also involved in European projects such as the Big Data task force created by Eurostat.
Nawres Guedria
Nawres is a data scientist who joined CASD (the French Secure Remote Access Center) in 2014 after graduating. She works on the Big Data platform of CASD. She takes part in the development and maintenance of the IT infrastructure and assists


In 2014, GENES and INSEE set up a Big Data platform in the highly secured area of the French RDC, called CASD-TeraLab. Its architecture allows memory-intensive and massively parallel processing of large data sets, including highly confidential data. A varied collection of software tools and a catalog of useful data sets are made available on the platform. CASD-TeraLab supports the development of a large range of research and industrial projects and contributes to the emergence of a data science community. This contribution gives an overview of the CASD-TeraLab platform architecture and presents a use case. The presentation will include a brief demonstration of the platform and more use cases.

Mark Forster


Mark Forster
PhD in Chemistry, University of London, in the field of Nuclear Magnetic Resonance (NMR).
Around 35 academic publications on the structure and dynamics of biomolecular systems studied by NMR and computational methods. Co-discoverer of the solution structure of the biologically active macromolecule heparin. Expertise in computational analysis of chemical and biological data, software development and data management. Co-editor of a published book on the application of open source software to pharmaceutical and life science research.

Chair of the ELIXIR Industry Advisory Committee (IAC). Chair of the Scientific Advisory Committee (SAB) for OpenPHACTS. Lead organiser of two Wellcome Trust workshops on research informatics. Industry experience in commercial software development, as well as former IS (Information systems) and current R&D team leader for Syngenta chemical research.

Open Innovation in the Agri-food industry: the utility and importance of public data in life science R&D

The utility and importance of public data to commercial life science R&D organisations will be described and explained. The key examples will pertain to the agri-biotech sector, such as agrochemicals, plant breeding and plant biotechnology, with extended applications to the pharmaceutical sector made clear. The research supported by these data resources has important future ramifications in terms of food security and human health and ultimately the ability of industry to develop solutions to some of the grand challenges faced by the world today.

Michael Bültmann

Managing Director HERE Deutschland GmbH and Nokia Technology GmbH

Since 2008, Michael Bültmann has been a Managing Director at Nokia, where he is responsible, among other things, for working with regulatory entities and governments across the world on behalf of the company. His work primarily focuses on Nokia’s mapping division, HERE.
Prior to that position, Michael worked as an attorney in Berlin and Paris and as senior legal counsel for major companies in Germany. In addition, he was a visiting lecturer in International Business Law at the University of Applied Sciences Lüneburg, Germany.
He studied law in Heidelberg, Germany and St. Gallen, Switzerland. Michael is also a board member at SRiW (Association of Self-Regulation in the Internet) and BITKOM, the Federal Association for Information Technology, Telecommunications and New Media.

Data Ownership in Mobility

When you consider mobility today, vast quantities of data come from a multitude of different sources, ranging from transport and road authorities to infrastructure, devices and vehicles. This data is collected by both private companies and public authorities. To overcome the challenges around mobility in dense urban environments, new forms of collaboration between all stakeholders are required with respect to data, and the discussion should be more about enabling temporary access to data and use of information, and less about owning information.
Data-intensive business models cannot be commercially sustainable unless there is a diligent consideration of how data is stored, handled and used while respecting the rights of the consumer.

Mohamed Boukhebouze

CETIC Research Center

Dr. Mohamed Boukhebouze is an R&D project manager at CETIC. He received his Master's degree in 2006 and his PhD in 2010, both in computer science, from the University of Lyon, France. From 2009 to 2012, he was a post-doctoral researcher at the Faculty of Computer Science of the University of Namur. His research interests include business process management, big data, data & process mining, and user interface modelling. He was involved in several research projects (including ITEA 2 UsiXML and eHealth for Citizens). Currently, he is the coordinator of the QualiHM project, which aims to develop a requirement engineering toolkit for efficient user interface design, and the leader of the data mining work package of the Inograms project, which aims to develop an advanced data mining system for failure prediction of rail equipment. He has authored several research papers published at international conferences and in journals (e.g. RCIS 2015, ICSOC 2015, ICWE 2012, IJBPIM 2011).
He is also a referee for international conferences and journals (including ANT, NISS, KER).

Towards an On-board Personal Data Mining Framework For P4 Medicine

This presentation proposes an on-board personal data-mining framework for P4 medicine. The framework enables disease prediction, risk prevention, personalised intervention and patient participation in healthcare. To achieve these objectives, the proposed framework relies on wearable devices and supports on-board data stream mining, which allows continuous monitoring and real-time decision-making. The proposed framework deals with the resource limitations of wearable devices, context and resource changes, and data quality issues by relying on distributed data mining, context- and resource-aware adaptation, and probabilistic data mining. The presentation shows how the framework can be used for the case of epilepsy.

Tomas Pariente Lobo

Atos SA

Veracity: The 4th Challenge of Big Data

Social media poses three major challenges, dubbed by Gartner the 3Vs of big data: volume, velocity, and variety. This presentation will present deep data and content analytics methods to address the fourth crucial, but hitherto largely unstudied, big data challenge: veracity. Novel cross-disciplinary social semantic methods for computing veracity from the PHEME European project will be presented. PHEME combines document semantics, a priori large-scale world knowledge (Linked Open Data) and a posteriori knowledge and context from social networks, cross-media links and spatio-temporal metadata. Key novel contributions are dealing with multilinguality, modelling rumour dynamics over time, contradiction and misinformation detection, and longitudinal models of users, influence, and trust.

Volker Markl, Asterios Katsifodimos and Christoph Boden

TU Berlin

Volker Markl is a Full Professor and Chair of the DIMA Group and Speaker for the Data Analytics Lab at TU Berlin, as well as director of the Berlin Big Data Center. In addition, he is the Speaker of a German National Science Foundation (DFG) funded Research Unit called Stratosphere, which continues to develop a next-generation big data analytics platform. Earlier this year, under his leadership, a study on big data challenges and opportunities was conducted for the German Federal Ministry of Economics & Technology (BMWi). To date, he has given over 200 invited talks and published over 80 research papers at world-class scientific venues. His research interests include new hardware architectures for information management, scalable processing and optimization of data programming languages, information processing, and information modeling.

Asterios Katsifodimos is a Postdoctoral Researcher working on the Stratosphere research project in the Database Systems and Information Management (DIMA) Group at the Technische Universität Berlin (TUB). He received his PhD in 2013 from INRIA Saclay and Université Paris-Sud under the supervision of Ioana Manolescu. His PhD thesis focused on materialized-view-based techniques for the management of web data. He was a member of the High Performance Computing Lab at the University of Cyprus, where he obtained his B.Sc. and M.Sc. degrees. His research interests include query optimization, large-scale distributed data management, and big data analytics.

Tilmann Rabl is a senior researcher who will join the DIMA Group in August and will be research coordinator of the Berlin Big Data Center. Furthermore, he is chair of the SPEC Research Group on Big Data, a Professional Affiliate of the Transaction Processing Performance Council, and a member of the Board of Directors of the BigData Top100 Initiative.

Apache Flink: A fast, scalable, and declarative open-source platform for Big Data Analytics

In this talk, we will present Apache Flink, an open source software stack for scalable big data analytics developed as a top-level project of the Apache Software Foundation with numerous European contributors. Having evolved from the Stratosphere project, Apache Flink provides a declarative query language and both batch and real-time stream processing capabilities. Furthermore, Flink treats user-defined functions as first-class citizens and provides automatic program parallelization and optimization, support for iterative programs, and an efficient, scalable execution engine. In this manner, Flink enables data scientists to focus on their respective problems and relieves them from the details of scalable systems programming.
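Flink programs are written as declarative dataflows of transformations such as flatMap, groupBy and sum, which the engine parallelizes and optimizes automatically. As a language-neutral illustration of the shape of such a program (a plain Python sketch of the classic word count, not Flink's actual API), the dataflow looks like:

```python
from collections import defaultdict

def flat_map(lines):
    """Analogue of Flink's flatMap: emit a (word, 1) pair per word."""
    for line in lines:
        for word in line.lower().split():
            yield (word, 1)

def group_by_key_and_sum(pairs):
    """Analogue of groupBy(0).sum(1): aggregate the counts per word."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

lines = ["to be or not to be"]
word_counts = group_by_key_and_sum(flat_map(lines))
# word_counts == {"to": 2, "be": 2, "or": 1, "not": 1}
```

The point of the declarative style is that the program names only the transformations; in Flink itself the same pipeline runs distributed over a cluster, with parallelization and optimization left to the engine.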