Keynote Speakers at ICCCN 2021

Biography:

Prof. Dimitra Simeonidou is a Full Professor at the University of Bristol, Co-Director of the Bristol Digital Futures Institute and Director of the Smart Internet Lab. Her research focuses on high-performance networks, programmable networks, wireless-optical convergence, 5G/B5G and smart city infrastructures. She increasingly works with the social sciences on topics of digital transformation for society and businesses. Dimitra was the Technical Architect and CTO of the smart city project Bristol Is Open, and she currently leads the Bristol City/Region 5G urban pilots. She is the author or co-author of over 600 publications, numerous patents and several major contributions to standards. She has co-founded two spin-out companies, the latest being the University of Bristol VC-funded spin-out Zeetta Networks, http://www.zeetta.com, which delivers SDN solutions for enterprise and emergency networks. Dimitra is a Fellow of the Royal Academy of Engineering, a Fellow of the IEEE and a Royal Society Wolfson Scholar.

===========================================================================

Decision-Making for Uncoordinated User Access to Limited Distributed Resources

Prof. Ioannis Stavrakakis
National and Kapodistrian University of Athens
Greece

Abstract: Information and Communication Technologies enable the generation and dissemination of information that can tremendously enhance our awareness of the environment and its resources. This resource awareness creates enhanced service opportunities, but it may also intensify competition for limited, distributed and uncoordinated resources, yielding contention penalties that could render such information provision counter-productive. Users interested in such resources therefore need to decide carefully whether it is more beneficial to compete for them or to abstain (and resort to less attractive but eventually more beneficial alternatives). The case of fully rational decision-making is briefly discussed, along with the induced price of anarchy. The effect of the level of information available to the competing users is discussed, along with the optimal or “stable” solutions to the competition problem. The full-rationality constraint is then relaxed by considering models of human-driven decision making that reflect computational/cognitive human limitations or biases. Such models are briefly discussed, especially those that parameterize the rationality factor (and, consequently, user behaviors) and can capture the entire spectrum of behaviors, from fully rational to completely random decision making. The comparative performance of decision making under various rationality levels is discussed and supported by theoretical as well as experimental results.

Biography:

Ioannis Stavrakakis (IEEE Fellow) is a professor in the Department of Informatics and Telecommunications, National and Kapodistrian University of Athens (Greece), and served as its Chair in 2013-16. He held faculty positions with Northeastern University (1994-99) and the University of Vermont (1988-94), following the completion of his Ph.D. at the University of Virginia, USA (1988). His research interests in networking include social, mobile, ad hoc, information-centric, delay-tolerant and future Internet networking; network resource allocation algorithms and protocols, traffic management and performance evaluation; and (human-driven) decision making in competitive environments. He has authored over 250 publications, supervised 20 Ph.D. graduates and been funded by the USA-NSF, DARPA, GTE, BBN and Motorola (USA), as well as Greek and EU agencies, including 2 Marie Curie grants (post-docs). He has served on NSF and EU-IST proposal panels and in the organization of conferences sponsored by IEEE, ACM, ITC and IFIP. He has served as chairman of IFIP WG6.3, as an officer of the IEEE Technical Committee on Computer Communications (TCCC), and on the editorial boards of the Proceedings of the IEEE, ACM/IEEE Transactions on Networking and Computer Communications, among others. More recently, he has been a visiting professor at University Carlos III de Madrid (UC3M) and the IMDEA Networks Institute, a recipient of a Chair of Excellence of the Comunidad de Madrid and a UC3M/Santander Chair of Excellence, a visiting professor at Politecnico di Torino and a Mercator Fellow of the German Research Foundation (DFG) at the MAKI centre at TU Darmstadt.

===========================================================================

Machine Learning and Security: The Good, The Bad, and The Ugly

Wenke Lee
Georgia Institute of Technology
Atlanta, GA, USA

Abstract: I would like to share my thoughts on the interactions between machine learning and security.
The good:
We now have more data, more powerful machines and algorithms, and, better yet, we no longer always need to manually engineer the features. The ML process is now much more automated and the learned models are more powerful, and this is a positive feedback loop: more data leads to better models, which lead to more deployments, which lead to more data. All security vendors now advertise that they use ML in their products.
The bad:
There are more unknowns. In the past, we knew the capabilities and limitations of our security models, including the ML-based ones, and understood how they could be evaded. But state-of-the-art models such as deep neural networks are not as intelligible as classical models such as decision trees. How do we decide to deploy a deep learning-based model for security when we don’t know for sure that it has learned correctly? Data poisoning also becomes easier. On-line learning and web-based learning use data collected at run time, often from an open environment. Since such data often results from human actions, it can be intentionally polluted, e.g., in misinformation campaigns. How do we make it harder for attackers to manipulate the training data?
The ugly:
Attackers will keep exploiting the holes in ML, and will automate their attacks using ML. Why don’t we just secure ML? That would be no different from trying to secure our programs, systems and networks, so we cannot expect complete success. We have to prepare for ML failures. Ultimately, humans have to be involved. The questions are how and when. For example, what information should an ML-based system present to humans, and what input can humans provide to the system?

Biography:

Wenke Lee is a Professor of Computer Science, the John P. Imlay Jr. Chair, and the Director of the Institute for Information Security & Privacy at Georgia Tech. His research interests include systems and network security, malware analysis, applied cryptography, and machine learning. He is an ACM Fellow and an IEEE Fellow.