
CS 5090: Computer Science Colloquium
Fall 2020 schedule


Here is this semester's tentative schedule; I will update it as the semester progresses. All assignments will be due via the class Canvas, in accordance with SLU's policies.

All presentations are planned for 3:10-4pm on Mondays (unless indicated otherwise in the table below), and will be conducted via Zoom for Fall 2020. You may join here, or with Meeting ID: 976 2491 6478 and Password: 956036.


Date Title Speaker Abstract Bio
August 17 Introduction to the course Erin Chambers
August 24 COSE: Configuring Serverless Functions using Statistical Learning Ali Raza Serverless computing has emerged as a compelling new paradigm for the deployment of applications and services. It represents an evolution of cloud computing with a simplified programming model that aims to abstract away most operational concerns. Running serverless functions requires users to configure multiple parameters, such as memory, CPU, cloud provider, etc. While relatively simple, configuring such parameters correctly while minimizing cost and meeting delay constraints is not trivial. In this paper, we present COSE, a framework that uses Bayesian Optimization to find the optimal configuration for serverless functions. COSE uses statistical learning techniques to intelligently collect samples and predict the cost and execution time of a serverless function across unseen configuration values. Our framework uses the predicted cost and execution time to select the “best” configuration parameters for running a single function or a chain of functions, while satisfying customer objectives. In addition, COSE has the ability to adapt to changes in the execution time of a serverless function. We evaluate COSE not only on a commercial cloud provider, where we successfully found optimal/near-optimal configurations in as few as five samples, but also over a wide range of simulated distributed cloud environments that confirm the efficacy of our approach. Ali Raza is a third-year Ph.D. student at Boston University (BU). Currently, his work deals with the orchestration of cloud resources for applications. He is particularly interested in optimizing the cost and performance of serverless functions and their usage in conjunction with other cloud services. Before joining BU, he was a researcher at NYU Abu Dhabi, where he worked on improving web connectivity in developing regions through advanced web-caching techniques.
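The cost/latency trade-off COSE navigates can be illustrated with a toy model. The execution-time curve, the Lambda-style GB-second price, and the memory sizes below are all assumptions for the example; the exhaustive search stands in for COSE's actual Bayesian Optimization:

```python
# Toy model of the serverless configuration problem COSE addresses.
# All numbers are hypothetical; COSE replaces the exhaustive search
# below with Bayesian Optimization over a few sampled measurements.

def exec_time(memory_mb, work=2000.0):
    """Assumed model: more memory also means more CPU, so time shrinks."""
    return work / memory_mb + 0.05  # seconds

def cost(memory_mb, price_per_gb_s=0.0000166667):
    """Cost = GB-seconds consumed * unit price (GB-second billing)."""
    return (memory_mb / 1024.0) * exec_time(memory_mb) * price_per_gb_s

def best_config(memories, delay_budget_s):
    """Cheapest memory size whose execution time meets the delay budget."""
    feasible = [m for m in memories if exec_time(m) <= delay_budget_s]
    return min(feasible, key=cost) if feasible else None

sizes = [128, 256, 512, 1024, 2048, 3008]
print(best_config(sizes, delay_budget_s=2.0))  # -> 2048
```

In practice the execution-time curve is unknown ahead of time, which is why COSE samples it and fits a statistical model rather than enumerating every configuration.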
August 31 Work in the Digital Age Miriam Cherry Professor Cherry will be discussing her coursebook entitled “Work in the Digital Age: Labor, Technology, and Regulation” forthcoming with Wolters Kluwer. In addition, she will speak about recent Future of Work initiatives at the National Science Foundation.

The coursebook focuses on certain technologies: the platform economy, gig work and crowdwork, big data and people analytics, gamification, artificial intelligence and algorithmic management, blockchain technology, drones, and 3D printing. Many jobs are now completely online and work is more global than ever before, enabling more efficient work structures. On-demand platforms connect companies seeking short-term help with workers willing to take on short-term assignments. Rapid technological changes, however, have not come without serious consequences. Issues such as precarious work and the fissuring of work have become problematic. Many online or remote jobs are subject to data collection and surveillance, leading to concerns about privacy, data security, off duty conduct, and employment discrimination by algorithm. These rapid technological developments have led many to question whether there even is a “future of work” or whether automation, algorithmic management, big data, and robots will lead to technological unemployment.

The book provides perspectives on these new and emerging technologies from employers, unions, technology workers, national courts and governments, and international organizations. Throughout, the question is whether current systems of labor and employment regulation are adequate and appropriate to respond to these new technologies, or whether new systems and structures are necessary. Current policies are important for thinking about shaping the future of work that we want: one that is efficient, equitable, and sustainable.

Professor Cherry is a graduate of Dartmouth College and Harvard Law School. Upon graduation, she clerked for judges on the Massachusetts Supreme Court and the United States Court of Appeals for the Eighth Circuit. She taught at several law schools before joining the faculty of SLU Law in 2010. She is the Co-Director of the Wefel Center for Employment Law, and the Associate Dean of Research and Engagement at the Law School. Professor Cherry teaches Employment Law, Employment Discrimination, a Seminar on the Future of Work, and an introductory class on Contract Law.
September 7 UAV-satellite spatio-temporal data fusion and deep learning for yield prediction Vasit Sagan In this work, we present a concept of UAV and satellite spatio-temporal data fusion for crop monitoring, specifically plant phenotyping and yield prediction. We show that (1) spatio-temporal data fusion from airborne and satellite systems provides an effective means for capturing early stress; (2) UAV data can complement the limitations of satellite remote sensing data for field-level crop monitoring, addressing not just mixed-pixel issues but also filling the temporal gap in satellite data availability; and (3) spatio-temporal gap-filling enables predicting yield more accurately using data collected at optimal growth stages (e.g., the seed filling stage). The concept developed in this paper also provides a framework for accurate and robust estimation of plant traits and grain yield and delivers valuable insight for high spatial precision in high-throughput phenotyping and farm field management. Dr. Vasit Sagan is an Associate Professor of GIScience and Faculty Director of the Geospatial Institute at Saint Louis University (SLU). He teaches an array of courses in GIS, remote sensing, geospatial methods, and GIS programming. His active research focuses on geospatial computer vision (photogrammetry, remote sensing, imaging science), developing state-of-the-art remote sensing and GIS tools, AI/machine learning, sensor/information fusion, and geospatial methods to study food and water security, with a specific focus on plant phenotyping, seed composition, yield prediction, and bioenergy.
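The temporal gap-filling idea can be sketched in a few lines. The vegetation-index values and the simple linear interpolation below are assumptions for the example, not Dr. Sagan's method:

```python
def fill_gaps(series):
    """Linearly interpolate missing (None) entries in a vegetation-index
    time series -- a toy stand-in for filling cloudy satellite passes
    with help from other observations. Assumes the first and last
    entries are observed."""
    filled = list(series)
    known = [i for i, v in enumerate(filled) if v is not None]
    for i, v in enumerate(filled):
        if v is None:
            left = max(k for k in known if k < i)
            right = min(k for k in known if k > i)
            t = (i - left) / (right - left)
            filled[i] = filled[left] * (1 - t) + filled[right] * t
    return filled

# Hypothetical weekly NDVI readings with two cloudy (missing) passes
print(fill_gaps([0.2, None, 0.4, None, None, 0.7]))
```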
Sept. 14 Data Who? Kyle Sykes The industry is filled with many “data _____” job titles, so deciphering them all can be confusing. How many ways can you realistically work with data? Are some of these actually the same? Which ones are someone with a CS degree prepared for? After this talk you should have some idea of what the different data-related roles are in the industry. We'll cover a lot of the common “data” related job roles and attempt to make some sense of it all. Kyle Sykes graduated from SLU in 2016 with a Ph.D. in Mathematics. His research area was in the field of computational topology/geometry. He has worked as a Data Scientist in the defense and healthcare industries and is currently a Data Engineer at 1904labs.
Sept. 21:
Note that this is co-located with Parks' graduate seminar, so email me to get the correct Zoom link
DeepRank2: Utilizing Computer Vision Techniques to Improve Protein Model Quality Assessment using Inter-Residue Distance Prediction and Deep Learning Jie Hou Residue-residue contact prediction and deep learning have demonstrated their effectiveness in improving protein model quality assessment (QA). Deep learning techniques have significantly enhanced inter-residue contact prediction, and further advanced inter-residue distance prediction for protein sequences. Moreover, deep learning networks have shown great potential for effectively integrating the power of multiple complementary QA metrics as well as the structural constraints derived from contact and distance predictions. The DeepRank1.0 method was blindly tested and ranked as one of the best predictors in selecting models in a community-wide competition for protein structure prediction (CASP13). As an improved version, we adopt several image similarity metrics used in the field of computer vision that aim to fully utilize the inter-residue contact/distance constraints for predicting the global quality of a protein model. Well-studied image similarity metrics for distance evaluation include the GIST descriptor, Oriented FAST and Rotated BRIEF (ORB), PHASH, PSNR & SSIM, Pearson correlation coefficient (PCC), and root mean square error (RMSE). The method was benchmarked on the CASP13 dataset and showed a significant improvement compared to the individual QA methods used to generate input features. The results indicate that deep learning holds the key for protein contact distance prediction and protein quality assessment. Bio: Dr. Jie Hou is an assistant professor in the Computer Science Department at Saint Louis University. Dr. Hou earned his Ph.D. degree in 2019 from the University of Missouri, Columbia. His research interests are in bioinformatics, specifically in the topics of protein structure prediction and omics data analysis.
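Two of the simpler metrics in the list, Pearson correlation (PCC) and RMSE, are easy to compute directly. The sketch below applies them to tiny hypothetical distance maps; it illustrates the metrics themselves and is not DeepRank2's implementation:

```python
import math

def pcc(a, b):
    """Pearson correlation coefficient between two flattened distance maps."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def rmse(a, b):
    """Root mean square error between two flattened distance maps."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

# Flattened upper triangles of two hypothetical 3-residue distance maps (angstroms)
predicted = [5.2, 8.1, 6.4]
observed = [5.0, 8.3, 6.0]
print(pcc(predicted, observed), rmse(predicted, observed))
```

A high PCC and low RMSE between a model's distance map and the predicted one suggest a higher-quality model, which is the signal such metrics feed into the QA network.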
Sept. 28 High frequency trading with computers Matthew Belcher High-frequency trading is on the cutting edge of computing, but its secretive nature makes it hard to get accessible information about what it takes to build a modern trading firm. In this talk, 14-year veteran Matthew Belcher will give an overview of the core business of high-frequency trading and the types of computing problems that successful firms need to solve.
Oct. 5
Oct. 6 at 2:15pm CST, co-located with Dr. Esposito's security class (email me for the Zoom link)
Storytelling 4 Cybersecurity Rebecca Harness Today, the industry struggles to articulate the most pressing risks facing a business, leading to every security control becoming a critical security control. The end result is an expensive, frustrating, and enigmatic cybersecurity program. With a rapidly changing threat landscape, security fatigue can set in quickly. In order to win business support, we must be able to tell a compelling cybersecurity story readily consumable by all. In this session, we’ll leverage a little Comm Theory 101 and audience-centered delivery techniques to create an influential cybersecurity story with an emotional, relatable hook. My goal for this presentation is to provide a FUN learning experience with a positive message, so that at the end of our discussion the audience is enthusiastic about transforming their cybersecurity strategy into a story with an emotional, relatable hook. Best of all, they’ll be provided with effective techniques to do just that! CISO @ Saint Louis University & SLUCare: Serving Saint Louis University as AVP & Chief Information Security Officer, overseeing the enterprise-wide information security program for the University and SLUCare. I have over twenty years of experience in IT, most of it in consulting, with the last ten focused on InfoSec. With a degree in Marketing & Communications as well, I have a unique background that helped me recognize the importance of this topic. With a significant gap between the number of cybersecurity professionals versus positions available, we're going to have to start communicating better to broader audiences if we're going to be successful as a profession.
October 12
at 3:30pm
Algorithms in Criminal Justice: problems with predictive policing Sorelle Friedler Algorithms are increasingly being used in high-stakes criminal justice settings. Predictive policing systems claim to be able to predict where crime will happen so that police can be deployed to a neighborhood to stop it. Previous work has shown that these systems are susceptible to feedback loops, where police are repeatedly sent back to the same neighborhoods. Why? What could be done to fix it? Should these algorithms be used? We’ll explore these questions by formally modeling the problem using urns, and discuss whether it makes sense to use predictive policing at all. Sorelle Friedler is an Associate Professor of Computer Science at Haverford College and an Affiliate at the Data & Society Research Institute. Her research focuses on the fairness and interpretability of machine learning algorithms, with applications from criminal justice to materials discovery. Sorelle is a Co-Founder of the ACM Conference on Fairness, Accountability, and Transparency (FAccT) and has received multiple grants for her work on preventing discrimination in machine learning. She serves as a member of Philadelphia’s pretrial reform research advisory council and is regularly consulted on the use of algorithms in the public sphere. Sorelle holds a Ph.D. in Computer Science from the University of Maryland, College Park.
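The urn modeling mentioned in the abstract can be sketched in a few lines. This is an illustrative Pólya-style urn, not Professor Friedler's exact model: each visit to a district increases the chance of visiting it again, producing the feedback loop the talk discusses.

```python
import random

def polya_urn(steps, seed=0):
    """Polya-urn sketch of a predictive-policing feedback loop.

    Two districts start with one 'discovered incident' each. Each round,
    police visit a district with probability proportional to its past
    count, and the visit adds another count -- a rich-get-richer process
    in which early, possibly arbitrary, differences compound.
    """
    rng = random.Random(seed)
    counts = [1, 1]
    for _ in range(steps):
        district = 0 if rng.random() < counts[0] / sum(counts) else 1
        counts[district] += 1
    return counts

print(polya_urn(1000))
```

Running this with different seeds shows that the final split between districts varies wildly even though both started identical, which is one way to see why a system trained on its own dispatch data can lock in a skewed picture of where crime occurs.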
October 19 Three-Layer Game Model for Improving the Payoff of Wireless Network Entities Naveen Naik Sapavath In a three-layer game model, the interactions among three entities: 1) wireless resource providers (WRPs); 2) mobile virtual network operators (MVNOs); and 3) their subscribed wireless users (i.e., IoT devices), are formulated as strategies to optimize their respective utilities. The WiVi enables WRPs (also known as layer-1 leaders in the three-layer game) to sublease their wireless resources to MVNOs (also known as layer-2 leaders in the three-layer game) through RF slicing and adaptively setting their prices for subleasing. The MVNOs set optimal competitive prices to attract more end-users/IoT devices (also known as followers in the three-layer game) to maximize their utilities. The end-users (IoT devices) maximize their data rates (i.e., utilities) while meeting the imposed quality of service (QoS) requirements and budget constraints. We present a formal analysis of the existence and uniqueness of the equilibrium point of the three-layer game. Performance is evaluated using simulation results. Results show that the proposed three-layer game has a unique and optimal equilibrium. The numerical results show maximized utilities for WRPs, MVNOs, and wireless users. Naveen Naik Sapavath received his Master of Engineering degree from the Indian Institute of Science (IISc), Bangalore, India in 2008. He is a Ph.D. candidate in Electrical and Computer Engineering at Howard University, Washington DC, USA. His research interests include wireless communications and networking for emerging networked systems, including cyber-physical systems, the Internet of Things, software-defined systems, game theory, and machine learning.
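The layered leader-follower structure can be illustrated with a toy three-level Stackelberg pricing chain solved by backward induction. The quadratic utility and all numbers are assumptions for the example, not the paper's model:

```python
# Toy WRP -> MVNO -> end-user pricing chain, solved by backward induction.
# User utility is assumed quadratic: u(r) = a*r - r**2/2 - price*r.

def follower_demand(a, price):
    """Layer 3: the user's utility is maximized at r = a - price."""
    return max(a - price, 0.0)

def mvno_best_price(a, wholesale):
    """Layer 2: the MVNO maximizes (p - wholesale) * (a - p);
    the optimum is p = (a + wholesale) / 2."""
    return (a + wholesale) / 2.0

def wrp_best_wholesale(a):
    """Layer 1: anticipating the MVNO's response, WRP revenue
    c * (a - c) / 2 peaks at c = a / 2."""
    return a / 2.0

a = 8.0                    # hypothetical demand parameter
c = wrp_best_wholesale(a)  # wireless resource provider's wholesale price
p = mvno_best_price(a, c)  # MVNO's retail price
r = follower_demand(a, p)  # end-user/IoT device's data-rate demand
print(c, p, r)  # -> 4.0 6.0 2.0
```

Each layer best-responds to the layer above it, and solving from the bottom up yields the game's unique equilibrium, mirroring the structure (though not the content) of the analysis in the talk.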
October 26 Building Intelligent Wireless Networks Estefanía Coronado Abstract: Future wireless networks, including WLANs, must follow an AI-native approach towards autonomous management, and become smart, agile, and able to learn from and adapt to the changing environment. In the transition from network softwarization to network intelligence, AI cannot be treated as an afterthought, but instead must be accounted for from the requirements phase. In this seminar, we will cover the challenges associated with introducing AI in wireless networks, and present two case studies. These studies will show the evolution in practical WLAN scenarios from the creation of AI silos as a solution to very specific problems, to the design of AI-aided system architectures. Finally, we will elaborate on the research outlook and prospective applications in AI-enabled wireless networks. Bio: Estefanía Coronado is a researcher in the Software Networks area at Fundació i2CAT (Spain). From 2018 to 2020 she was an expert researcher at Fondazione Bruno Kessler (Italy). In 2018, she completed her PhD at the University of Castilla-La Mancha (Spain) on multimedia distribution over SD-WLANs. She received M.Sc. degrees in Computer Engineering and Advanced Computer Technologies in 2014 and 2015 from the same university. She has published around 25 papers in international journals and conferences, and she is part of the IEEE Edge Automation Working Group. Her current research interests include wireless and mobile communications, MEC systems, network slicing, SDN, NFV, AI-driven networks, and automated network management.
November 2 Utilizing Natural Variation and High-Throughput Phenotyping with PlantCV for Crop Improvement Malia Gehan To tackle the challenge of producing more food and fuel with fewer inputs, a variety of strategies to improve and sustain crop yields will need to be explored. These strategies may include: mining natural variation of wild crop relatives to breed crops that require less water; increasing crop temperature tolerance to expand the geographical range in which they grow; and altering the architecture of crops so they can maintain productivity while being grown more densely. These research objectives can be achieved with a variety of methodologies, but they will require both high-throughput DNA sequencing and phenotyping technologies. A major bottleneck in plant science is the ability to efficiently and non-destructively quantify plant traits (phenotypes) through time. PlantCV (http://plantcv.danforthcenter.org/) is an open-source, open-development suite of image processing and analysis tools that analyzes images from visible, near-infrared, and fluorescent cameras. Here we present new PlantCV analysis tools available in version 3.0, which includes interactive documentation, color correction, and the development of thermal and hyperspectral imaging tools aimed at the identification of early abiotic stress response. Malia Gehan is an Assistant Member and Principal Investigator at the Donald Danforth Plant Science Center, whose group focuses on understanding mechanisms of crop resilience under temperature stress. To study temperature stress and natural variation, the Gehan lab develops high-throughput and high-resolution image-based phenotyping technologies, including low-cost solutions that use Raspberry Pi computers. The Gehan Lab co-develops and maintains the open-source, open-development suite of image analysis tools, PlantCV (https://plantcv.danforthcenter.org/). Dr. Gehan was part of the steering committee that helped to form the North American Plant Phenotyping Network and was elected to its board in 2020. Dr. Gehan is interested in increasing communication and connections across phenomics-related disciplines and organizations; using plant phenotyping as a way of increasing student interest in plant science and skills in data science; and democratizing plant phenotyping using open-source hardware and software.
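Image-based phenotyping pipelines of the kind PlantCV provides typically begin by segmenting plant pixels from the background and then measuring traits from the resulting mask. The stdlib-only sketch below illustrates that idea with a made-up image and cutoff; it does not use PlantCV's API:

```python
def threshold_mask(image, cutoff):
    """Binary mask: True where pixel intensity exceeds the cutoff
    (a toy stand-in for a segmentation step)."""
    return [[pixel > cutoff for pixel in row] for row in image]

def projected_area(mask):
    """A simple trait: the number of plant pixels in the mask."""
    return sum(sum(row) for row in mask)

# Tiny hypothetical grayscale image: bright plant pixels on a dark background
image = [
    [10, 12, 200, 11],
    [14, 210, 220, 13],
    [12, 205, 15, 10],
]
mask = threshold_mask(image, 128)
print(projected_area(mask))  # -> 4
```

Tracking such a trait across images taken over days gives the non-destructive, through-time measurements the abstract describes as the bottleneck in plant science.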
November 9 Routing Optimization in Heterogeneous Wireless Networks for Space and Mission-Driven Internet of Things (IoT) Environments Sara El Alaoui The rising number of vendors and the variety in platforms and wireless communication technologies have introduced heterogeneity to networks, compromising the efficiency of existing routing algorithms. Through our research on routing in heterogeneous wireless networks for space and mission-driven IoT, we show that precise modeling of network heterogeneity properties enables us to enhance network performance in terms of various metrics, such as end-to-end delay and network utilization. By using different tools including machine learning, edge computing, statistical analysis, MADM, and age of information, we demonstrate that heterogeneity and the lack of network infrastructure can be overcome, paving the way for heterogeneous wireless networks that are highly efficient and dynamic. Sara El Alaoui received her B.S. in Computer Science from Al Akhawayn University in Morocco, and her M.S. and Ph.D. in Computer Science and Engineering from the University of Nebraska-Lincoln, under the supervision of Dr. Byrav Ramamurthy. Her research interests focus on communication networks and routing optimization. She is currently working on optimizing communications in heterogeneous wireless networks with applications in space and IoT environments, using various techniques such as machine learning and mathematical modeling. Sara has authored several publications in flagship conferences and journals, such as IEEE/ACM Transactions on Networking. She has contributed to the "MobilityFirst Project" and the "Next-Phase MobilityFirst," one of NSF's Future Internet Architecture projects.
November 16
November 23 Dependency analysis of noun incorporation in polysynthetic languages Francis Tyers This paper describes an approach to annotating noun incorporation in Universal Dependencies. It motivates the need to annotate this particular morphosyntactic phenomenon and justifies it with respect to the frequency of the construction. A case study is presented in which the proposed annotation scheme is applied to a corpus of Chukchi, a highly endangered language of Siberia that exhibits noun incorporation. We compare argument encoding in Chukchi, English, and Russian and find that while in English and Russian discourse elements are primarily tracked through noun phrases and pronouns, in Chukchi they are tracked through agreement marking and incorporation, with a lesser role for noun phrases. Francis M. Tyers is an assistant professor in Computational Linguistics at Indiana University, Bloomington. He received his PhD in Computer Science from the Universitat d'Alacant in 2013 and has since worked in Norway at UiT Norgga árktalaš universitehta as a postdoctoral fellow and in Russia at the Higher School of Economics in Moscow as an assistant professor. His main research interests are in language technology for indigenous and marginalised language communities. He has worked in a range of areas, including machine translation, morphological analysis, dependency parsing, and speech recognition, and with a range of languages and language families. He is president of Apertium, a free/open-source platform for machine translation, and a member of the core group of the Universal Dependencies project.