John R. Emmons

Second year computer science PhD student at Stanford University


Blog: blog.johnemmons.com
Email: mail@johnemmons.com
Links: Github, Google Scholar, Linkedin
Vitals: CV/Resume


Recent news (January 1, 2018):

My low-latency video transmission system, Salsify (code), was just accepted for publication at NSDI'18! Come join my coauthors and me in Renton, Washington on April 9-11th to see the presentation and demo. Also, my ongoing collaboration with Stanford geophysicist Dustin Schroeder was recently presented at the American Geophysical Union (AGU) annual meeting and featured in a Stanford press release! Keep an eye out for our journal submission in the coming months.


Research interests:

My research interests are broadly at the intersection of computer systems and machine learning (specifically computer vision). My current work focuses on building systems that accelerate video compression, transmission, and semantic understanding.

Biographical sketch:

I am a PhD student at Stanford University studying computer science (co-advised by Keith Winstein and Silvio Savarese). During my undergrad, I participated in the engineering dual degree program between Washington University in St. Louis and Drake University. I have an MS degree in computer science and three BS degrees in (1) computer engineering, (2) electrical engineering, and (3) computer science, mathematics, and physics (triple major).

In my spare time, I run long-distance road races, competing mostly in marathons and half marathons. My most recent success was the San Francisco Marathon; I finished in 2:57:47, placing 43rd of 6506 participants and 9th in my age division.

Projects:

Salsify (code, pdf): Salsify is a system for real-time Internet video transmission that achieves 3.9x lower delay and 2.7 dB higher visual quality on average when compared with five existing systems: FaceTime, Hangouts, Skype, and WebRTC (with and without scalable video coding). Salsify achieves these gains through a joint design of the video codec and transport protocol that features a tighter integration between these components. In our paper (linked above), we describe the design and implementation of Salsify and the series of experiments we performed to quantitatively measure its performance improvement.
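To give a flavor of the codec/transport integration, here is a minimal sketch (in Python, purely illustrative and not the actual Salsify implementation): the encoder produces each frame at two quality levels, and the transport selects the version whose size fits its current estimate of available network capacity, or skips the frame entirely if neither fits. The function name, quality labels, and byte figures are all made up for illustration.

```python
# Illustrative sketch of a codec/transport handshake: the transport picks the
# best-quality encoded version of a frame that fits its capacity budget.
# (Hypothetical names and numbers; not taken from the Salsify paper.)

def select_version(sizes, capacity_estimate_bytes):
    """Pick the largest encoded version that fits the capacity budget.

    sizes: dict mapping a quality label -> encoded size in bytes
           (assumed two entries, e.g. "higher" and "lower").
    Returns the chosen quality label, or None to skip this frame.
    """
    fitting = {q: s for q, s in sizes.items() if s <= capacity_estimate_bytes}
    if not fitting:
        return None  # network is congested: skip rather than build a queue
    return max(fitting, key=fitting.get)  # best quality that still fits

# Example: a 45 KB budget fits only the lower-quality version
choice = select_version({"higher": 60_000, "lower": 40_000}, 45_000)
```

The point of the sketch is the feedback loop: because the transport chooses between concrete encoded frames (or no frame at all) rather than asking the codec to hit an abstract bitrate target, queueing delay stays low even when capacity drops suddenly.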

AWSLambdaFace (blog, code): Serverless compute platforms such as Amazon Web Services (AWS) Lambda were intended for web microservices and for handling asynchronous events generated by other Amazon web services (DynamoDB, S3, SNS, etc.). However, AWS Lambda also allows users to upload arbitrary Linux binaries along with their lambda functions. In this project, I deployed a full-blown deep convolutional neural network (CNN) based face recognition tool on AWS Lambda and used the system to query for faces in videos in a massively parallel fashion.
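The fan-out pattern behind such a query can be sketched as follows (illustrative only, not the actual AWSLambdaFace code): each video chunk is handed to its own parallel invocation, and matches are collected as results come back. Here `invoke_recognizer` is a local stand-in; a real version would call `boto3.client("lambda").invoke(...)` with the deployed face-recognition function's name.

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for the remote Lambda call: for demonstration, pretend every
# even-numbered chunk contains the queried face. A real implementation
# would serialize the chunk reference and invoke the Lambda function.
def invoke_recognizer(chunk):
    return {"chunk": chunk, "match": chunk % 2 == 0}

def query_faces(num_chunks, max_workers=64):
    """Fan a video's chunks out across many parallel invocations and
    return the chunk indices where the queried face was found."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        results = list(pool.map(invoke_recognizer, range(num_chunks)))
    return [r["chunk"] for r in results if r["match"]]

matches = query_faces(8)  # chunks 0..7 scanned concurrently
```

Because each invocation is stateless and independent, the same pattern scales from a handful of chunks on one machine to thousands of concurrent Lambda executions.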

NoScope (blog, code, pdf): Video data is exploding -- the UK alone has over 400 thousand CCTVs, and YouTube users upload over 300 hours of video every minute. Recent advances in deep learning enable automated analysis of this growing amount of video data, allowing us to query for objects of interest, detect unusual and abnormal events, and sift through lifetimes of video that no human would ever want to watch. However, these deep learning methods are extremely computationally expensive: state-of-the-art methods (as of 2017) for object detection run at ~100 frames per second on an NVIDIA P100 GPU. This is tolerable for small numbers of videos, but it is infeasible for real deployments at scale. In this project, I helped build a system that accelerates the computation of deep CNN-based visual queries using ideas borrowed from the database community.
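One of those borrowed ideas is a model cascade, which can be sketched as follows (a simplified illustration with made-up thresholds and stand-in models, not the actual NoScope code): a cheap specialized model scores each frame, and the expensive reference CNN runs only when the cheap score is ambiguous.

```python
# Illustrative model cascade: answer confidently-easy frames with a cheap
# model and fall back to the expensive reference CNN only when uncertain.
# (Stand-in models and thresholds; real systems train these per query.)

def cheap_score(frame):
    # Stand-in for a lightweight specialized model: returns P(object present).
    return frame / 10.0

def expensive_detect(frame, calls):
    calls.append(frame)  # record how often the full CNN is invoked
    return frame >= 5    # stand-in for the reference CNN's decision

def cascade_query(frames, low=0.2, high=0.8):
    calls, results = [], []
    for f in frames:
        s = cheap_score(f)
        if s <= low:
            results.append(False)   # confidently absent: skip the full CNN
        elif s >= high:
            results.append(True)    # confidently present: skip the full CNN
        else:
            results.append(expensive_detect(f, calls))  # ambiguous: full CNN
    return results, calls

results, full_cnn_calls = cascade_query(range(10))
```

In this toy run, only the frames with ambiguous cheap-model scores reach the expensive detector, which is the source of the speedup: most frames in real video are easy (nothing changes between them), so the full CNN runs on only a small fraction of them.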

Publications: (bibtex)

Sadjad Fouladi, John Emmons, Emre Orbay, Catherine Wu, Riad Wahby, Keith Winstein. "Salsify: Low-Latency Network Video through Tighter Integration between a Video Codec and a Transport Protocol". NSDI. Mar. 2018. [pdf]

Dustin M. Schroeder, Julian A. Dowdeswell, Emma J. Mackie, Katherine I. Vega, John R. Emmons, Keith Winstein, Robert G. Bingham and Toby J. Benham, "High-Resolution Digitization of the Film Archive of SPRI/NSF/TUD Radar Sounding of the Antarctic Ice Sheet." AGU. Dec. 2017.

Daniel Kang, John Emmons, Firas Abuzaid, Peter Bailis, Matei Zaharia. "Optimizing Deep CNN-Based Queries over Video Streams at Scale". VLDB. Aug. 2017. [pdf]

Hongyi Xin, Sunny Nahar, Richard Zhu, John Emmons, Gennady Pekhimenko, Carl Kingsford, Can Alkan, and Onur Mutlu. "Optimal Seed Solver: Optimizing Seed Selection in Read Mapping". Oxford Bioinformatics, Nov. 2015. [pdf][supp]

Hongyi Xin, John Greth, John Emmons, Gennady Pekhimenko, Carl Kingsford, Can Alkan, and Onur Mutlu. "Shifted Hamming Distance: A Fast and Accurate SIMD-Friendly Filter for Local Alignment in Read Mapping". Oxford Bioinformatics, Dec. 2014. [pdf]

Igor A. Ivanov, Anatoli S. Kheifets, Klaus Bartschat, John Emmons, Sean M. Buczek, Elena V. Gryzlova, and Alexei N. Grum-Grzhimailo. "Displacement effect in strong-field atomic ionization by an XUV pulse". Physical Review A. Oct. 2014. [pdf]

Last updated: Apr 27, 2018