Get to know the researchers
Gain an in-depth understanding of their research and how it connects to our UN SDG commitments.
This year’s research conference includes project presentations by undergraduate students. Hear firsthand as these students share their experiences and discuss the various research opportunities available to undergraduates. Explore a variety of topics inspired by the UN’s Sustainable Development Goals (SDGs), and hear about recent discoveries, cutting-edge research, and emerging technologies in fields such as machine learning algorithms, energy efficiency, bioengineering, educational software services, RSO satellite models, water ice mapping on Mars, and more.
UN SDG 3: Good Health & Well-being
Supervisor: Peter Park
Abstract: Development of a Post-Crash Care Dashboard
Vision Zero is a road safety philosophy that has been incorporated into transportation planning across the world. It aims to reduce vehicle-related fatalities and injuries through strategic planning of various road safety elements: Safe Roads, Safe Vehicles, Safe Speeds, and Safe People. Presently, Canada's Vision Zero documentation does not address one key element of the road safety strategic plan: Post-Crash Care (e.g., Emergency Medical Services; EMS), which dispatches first responders to help victims of vehicle crashes by transporting them to emergency medical care centres. Firefighters also act as first responders, dispatching fire engines to vehicle crash locations from local fire stations to ensure the fastest response time. EMS vehicle response time is a critical value that is continuously monitored to determine EMS response capabilities. This data is used to assess the emergency response capabilities of the existing infrastructure, and it can inform, for instance, the need for a new fire station. The focus of this project is to develop an interactive Post-Crash Care Dashboard using ArcGIS Online. The dashboard is built from the City of Vaughan’s fire incident data, which contains dispatching information for vehicle crashes and extrications. The dashboard includes incident maps and various charts that display incident location, alarm type, time of the incident, travel time zones, and EMS response time percentile statistics, and it enables visualization of the incident data based on filters applied to the dashboard elements in response to user input.
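A minimal sketch of the response-time percentile statistics such a dashboard would display, filtered by alarm type. The field names and values below are illustrative, not taken from the City of Vaughan dataset.

```python
# Hypothetical sketch: EMS response-time percentile statistics of the kind
# the dashboard displays, with an alarm-type filter applied.

def percentile(values, p):
    """Return the p-th percentile via linear interpolation between ranks."""
    xs = sorted(values)
    if not xs:
        raise ValueError("no data")
    k = (len(xs) - 1) * p / 100.0
    lo, hi = int(k), min(int(k) + 1, len(xs) - 1)
    return xs[lo] + (xs[hi] - xs[lo]) * (k - lo)

# Illustrative incident records (not real data).
incidents = [
    {"alarm_type": "vehicle_crash", "response_min": 4.2},
    {"alarm_type": "vehicle_crash", "response_min": 6.8},
    {"alarm_type": "extrication",   "response_min": 9.1},
    {"alarm_type": "vehicle_crash", "response_min": 5.0},
]

# Filter by alarm type, as the dashboard does in response to user input.
times = [i["response_min"] for i in incidents if i["alarm_type"] == "vehicle_crash"]
p90 = percentile(times, 90)   # 90th-percentile response time, in minutes
```

In practice ArcGIS Dashboards computes such statistics internally; this only shows the arithmetic behind a percentile readout.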
Supervisor: Satinder K. Brar
Abstract: Toxicity Studies of Imipenem & Imipenem Copper Complex on Two Common Wastewater Bacterial Species
The over-consumption of antibiotics over the years has led to their high detection rates in wastewater systems. Further, with the co-occurrence of metals and antibiotic residues in wastewater treatment plants (WWTPs), there is a strong possibility of the formation of antibiotic metal complexes (AMCs), which are usually more stable and more toxic to microbial communities. Imipenem is a broad-spectrum, relatively new antibiotic used as a last resort for patients with severe bacterial infections. Imipenem can interact with metals, such as copper (II), and affect the microbial communities in these aquatic environments. In this regard, this study compares the effects of imipenem and imipenem-Cu(II) complexes on two bacterial species, E. coli and B. subtilis, which are gram-negative and gram-positive bacteria, respectively, and are commonly detected in WWTPs. Copper was selected because of its high occurrence in WWTPs; moreover, copper is known for its antibacterial properties. The present study determines the minimum inhibitory concentrations (MICs) for imipenem and imipenem-Cu(II) (1:1 and 1:2) and compares the colony forming unit (CFU) counts as a measure of toxicity. We expect the CFU count for imipenem-Cu(II) to be much lower than that for imipenem, because AMCs tend to be more toxic to bacterial communities. Further, the MIC values will provide insight into how much imipenem is required to have a severe impact on bacterial species. This study aims to provide clarity on the issue of imipenem pollution in wastewater systems and its effects on microbial communities.
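The CFU comparison described above rests on the standard plate-count formula; a minimal sketch with illustrative colony counts and dilutions (not the study's measurements):

```python
# Standard CFU/mL calculation used to compare toxicity between treatments.
# Colony counts, dilution factors, and plated volumes below are illustrative.

def cfu_per_ml(colonies, dilution_factor, volume_plated_ml):
    """CFU/mL = colonies counted * dilution factor / volume plated."""
    return colonies * dilution_factor / volume_plated_ml

# Example: 85 colonies on a 10^-4 dilution plate, 0.1 mL plated.
control = cfu_per_ml(85, 10**4, 0.1)   # untreated culture
treated = cfu_per_ml(12, 10**4, 0.1)   # e.g. antibiotic-exposed culture

# Fractional survival is a simple readout of relative toxicity.
survival = treated / control
```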
Supervisor: Pouya Rezai
Abstract: In-vivo Microfluidic Assays to Investigate Cardiac Functional Impairments and Developmental Abnormalities Associated with SARS-CoV-2 ORF3a Protein Expression in Drosophila Larval Heart
In-vivo imaging of Drosophila embryos and larvae requires complete immobilization and orientation for optical accessibility to different organs and cells. Here, a previously developed microfluidic device and a heartbeat quantification software were used to study the effect of SARS-CoV-2 ORF3a protein expression in the heart of Drosophila larvae. Using the GAL4-UAS expression system, the ORF3a protein was specifically overexpressed in the heart of 3rd instar Drosophila larvae. Larvae were loaded, oriented, and immobilized inside the microfluidic device using loading and orientation glass capillaries, and immobilization microchannels, respectively. Then, heart activity was video-recorded and the heartbeat parameters were measured using a MATLAB-based software. To orient Drosophila embryos, we proposed another microfluidic device capable of out-of-plane rotational manipulation. We trapped bubbles within the microchannel of the device and employed a piezoelectric transducer to generate controlled vibrations, creating vortices around the bubbles. At the larval stage, the heart-specific overexpression of ORF3a in larvae increased the heart rate (HR) and arrhythmicity (AR) by 17% and 24%, respectively. For embryos, the optimized frequency and peak-to-peak voltages for out-of-plane rotation were found to be 5 kHz and within 1 to 3.5 V, respectively. Furthermore, for a stable bubble size within the microchannel, we determined that the required internal pressure of the microchannel was ~4.9 kPa, thereby preventing rectified diffusion. The heart-specific overexpression of the ORF3a protein significantly increased the HR and AR of Drosophila larvae. Moreover, we have developed an acoustofluidic rotational manipulation technique for high-throughput imaging of embryos.
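A sketch of how heartbeat parameters like HR and AR can be derived from beat timestamps. This is one plausible definition (AR as the coefficient of variation of beat-to-beat intervals); the MATLAB software used in the study may define these quantities differently.

```python
# Hedged sketch: heart rate (HR) and an arrhythmicity index (AR) from a
# list of beat times, as a heartbeat-quantification tool might compute them.

def heart_params(beat_times_s):
    """Return (HR in beats/min, AR as std/mean of beat intervals)."""
    intervals = [b - a for a, b in zip(beat_times_s, beat_times_s[1:])]
    mean_ivl = sum(intervals) / len(intervals)
    hr_bpm = 60.0 / mean_ivl
    var = sum((x - mean_ivl) ** 2 for x in intervals) / len(intervals)
    ar = (var ** 0.5) / mean_ivl   # 0 for a perfectly regular heart
    return hr_bpm, ar

# Perfectly regular beats every 0.5 s -> 120 bpm, AR = 0.
hr, ar = heart_params([0.0, 0.5, 1.0, 1.5, 2.0])
```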
Supervisor: Hossein Kassiri
Abstract: Software Development for a Wearable Brain EEG Monitoring Device
Electroencephalography (EEG) is known as the best non-invasive method for high-resolution, real-time monitoring of brain neural activity. The standard way of conducting an EEG recording, however, requires a trained technician to run the experiment, which involves patient preparation, electrode placement, equipment setup, data collection, and interpretation. Motivated by this, several wireless wearable EEG recording headsets have been developed over the past few years, aiming to provide a fast, low-cost, and medically relevant alternative to the existing technology and thus achieve long-term ambulatory EEG recording. Successful development of such technology would have a significant positive impact on many diagnostic, treatment, rehabilitation, and communication applications. In the Integrated Circuits and Systems Lab, we have created a wearable wireless device that can incorporate a large number of recording channels and is intended to be used as a low-cost, long-term brain monitoring solution. A patented algorithm in the device enables the early detection of epileptic seizures. The major goal of this project is designing, creating, and testing Windows-based software (or an Android app) that communicates with this wearable device to gather, store, and display the data it transmits.
Supervisor: Manos Papagelis
Profile: Nina is a 5th-year software engineering student. Over the previous summer, as part of the NSERC USRA 2021 program under Dr. Papagelis, she worked on a risk-based trip recommender model, work that will continue this summer.
Abstract: Optimal Risk-aware Point-of-Interest (POI) Recommendations during Epidemics
Human mobility plays an essential part in the advancement of a pandemic. As such, travel restrictions and quarantines have been used to mitigate it, despite causing social and economic turmoil. Previous studies evaluated recommendation models as tools to replace lock-downs, support decision-making, and mitigate infections by recommending the least risky trips and points of interest (POIs). This research aims to expand on these models by using mobility-based risk-aware recommendations to minimize the global risk of infection at POIs. The problem is formulated as a many-to-many assignment linear integer programming problem, where many users can be directed to many POIs. The model receives queries over a short span of time, which consist of sources, search radii, and times of departure, and returns sorted top-k POIs for each user, such that the aggregated risk at all POIs is minimized. More specifically, it uses a static risk approximator based on spatiotemporal mobility data to estimate expected occupancies at POIs over time. Once the top-k sorted POIs are selected, they are treated with exponentially decaying likelihoods of being chosen by the user. The model takes these likelihoods and treats them as dynamic risks that are added to the static risks of their respective POIs for a median dwell-time period. The model aims to find the combination that minimizes these added risks. An extensive evaluation was performed using synthetic queries over an hour in the Toronto region, where the top-k sorted POIs were evaluated against k random selections. Preliminary results demonstrate that the model outperformed sensible baselines. Overall, this data-driven model can be a vital tool in mitigating the risk of infection and encouraging responsible behaviours in the community.
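One step of the pipeline described above, sketched in isolation: assigning exponentially decaying choice likelihoods to the ranked top-k POIs and adding them as dynamic risk on top of each POI's static risk. The decay base and the risk values are assumptions for illustration; the actual model solves the full integer program over all queries.

```python
# Illustrative sketch of the dynamic-risk update for one user's top-k list.
import math

def choice_likelihoods(k, decay=1.0):
    """Exponentially decaying, normalized likelihoods for ranks 0..k-1."""
    w = [math.exp(-decay * r) for r in range(k)]
    total = sum(w)
    return [x / total for x in w]

static_risk = {"poi_a": 0.30, "poi_b": 0.10, "poi_c": 0.05}  # assumed values
topk = ["poi_b", "poi_c", "poi_a"]          # ranked least-risky first

risk = dict(static_risk)
likes = choice_likelihoods(len(topk))
for rank, poi in enumerate(topk):
    risk[poi] += likes[rank]                # dynamic risk added to static risk

aggregated = sum(risk.values())             # the quantity the ILP minimizes
```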
Roozbeh Alishahian, Sofia Graci, and Youssef Demashkieh
Program: LURA, LURA, and RAY
Supervisor: Aleksander Czekanski
Profile: Sofia is entering her fourth year of Mechanical Engineering at Lassonde and she is interested in studying the effects of different hydrogel compositions on characteristics of interest such as final print edge quality.
Abstract: Development of Tools for Bioprinting Fidelity Assessment and 3D Printing Bioinks with Internal Structures
Bioinks (biocompatible hydrogels) are used in 3D bioprinting as the base material to produce living tissue. They require certain properties to be suitable for such applications: biocompatibility, printability, repeatability, the ability to support cells, and consistency across the print are some of the main criteria bioinks must meet to be useful for the fabrication of tissue. Using mixtures of gelatin (type A), sodium alginate, and cellulose nanofibers (CNF), we synthesized various solutions for bioprinting and then tested the properties of the structures created to gain insights into finding the optimal mixture for biostructure printing.
In this project, we created a series of tools and techniques to assess and quantify the main properties of the printed structures. The mixtures underwent Fourier transform infrared (FTIR) analysis to determine the chemical makeup of each mixture, alongside a particle injection and distribution study to examine the possible live cell distribution within the print. Simple linear prints and square structures were photographed under a microscope and then studied using continuous edge detection and other image processing to assess the fidelity of the print. The final complex structures were studied using a hybrid rheometer and X-ray microscopes to determine the mechanical properties and the fidelity of the internal structure. The optimal mixture and the assessment tools created during this research provide other researchers with access to a reliable mixture for specified applications and general tools to quantify the properties of future mixtures for other types of bioprinting.
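A toy sketch of one print-fidelity metric of the kind edge detection enables: once edge pixel coordinates are extracted from a microscope image of a linear print, the edge's deviation from a straight line quantifies edge quality. The metric and the coordinates below are illustrative, not the team's actual image-processing pipeline.

```python
# Hypothetical edge-quality metric: RMS deviation of detected edge
# y-coordinates from their mean (an ideal straight edge deviates by zero).

def edge_roughness(ys):
    mean = sum(ys) / len(ys)
    return (sum((y - mean) ** 2 for y in ys) / len(ys)) ** 0.5

ideal = [10.0, 10.0, 10.0, 10.0]   # perfectly straight printed edge
rough = [10.0, 11.0, 9.0, 10.0]    # wavy edge -> nonzero roughness
```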
UN SDG 4: Quality Education
Supervisor: Andrew Maxwell
Profile: Audrey is a 4th year student majoring in Economics under the Faculty of Liberal Arts and Professional Studies. Audrey is interested in entrepreneurship, startup culture, design thinking, and User Experience Design.
Abstract: Improving UNHack Design Thinking Process
UNHack is a hackathon program, an experiential learning activity, that runs for 2-3 days annually, typically over a weekend. Students leverage the design thinking mindset through a series of design sprint stages to find solutions to wicked problems framed as "How Might We" prompts. Each wicked problem underlies one or more United Nations Sustainable Development Goals. During the COVID-19 pandemic, UNHack operated in an online setting where students used digital tools such as Discord to communicate with mentors, staff, and team members, and staff-designed Miro templates to assist them in the design thinking process. As academic institutions transition to the post-pandemic period, UNHack is expected to operate either in a hybrid or a fully in-person setting. UNHack goes through multiple stages: (1) Opportunity Identification / Discovery Research, (2) Problem Definition, (3) Ideation, (4) Design / Solution Development, and (5) Business & Market Validation.
Alexandro Salvatore Di Nunzio, Damith Tennakoon
Program: Research Assistants
Supervisor: Mojgan Jadidi
Profiles: Alexandro is a research assistant working on the Virtual Sandbox Development Project under the supervision of Dr. Jadidi. He recently graduated from the Lassonde School of Engineering with a Bachelor of Arts in Digital Media, with a Specialized Honours in Game Arts. He also has a Bachelor of Science in Psychology, which has proven to be pertinent when it comes to the development of systems concerning UI/UX Design and Human-Computer Interaction.
Damith is an undergraduate research assistant for the XR Sandbox Development project at GeoVA Lab. He has a passion for devising, developing, and applying high technology in engineering education. In a world that is constantly evolving, he believes that through the application of physics and engineering, we can steer the spear of innovation towards sustainability and technological advancement.
Abstract: The Augmented and Virtual Reality Sandbox - Teaching Complex Systems through Immersive Environments
In the past, complex Earth systems concepts have typically been delivered through pencil and paper. Students are expected to comprehend the 3-dimensional world by reading 2D paper or digital topographic maps, making it difficult to grasp Earth systems concepts. To overcome this, we have adopted the Augmented Reality (AR) Sandbox, initially developed by UC Davis, which uses a depth sensor to create a topographic map of its sandy terrain that is then projected, in real time, onto the sand. This system is well designed for visualizing 3D terrain surfaces; however, it cannot display or compute geological, hydrological, and man-made structure concepts. To address these limitations, we have been developing a virtual counterpart of the AR Sandbox, called the Virtual Sandbox. The Virtual Sandbox is a web-based computer application, developed using the Unity Game Engine, that allows users to perform complex surface terrain model analysis. Users are provided with tools that enable them to measure the horizontal distance between two points, visualize a planar structure with the 3-point problem approach, and generate rivers using a fluid simulation. Further, our team has also begun the development of a fully immersive virtual reality application, called the Virtual Reality (VR) Sandbox, using the Meta Quest 2 VR headset. This application enables students to take virtual tours of well-known locations on Earth such as the Grand Canyon and the Swiss National Park, and offers tools such as a river generation tool to visualize how rivers once flowed on varying terrains. We are extending this technology to further engineering topics such as circuit board prototyping, drone assembly, and satellite design, aiming to provide immersive lab experiences for Lassonde students.
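The "3-point problem" mentioned above has a compact analytic core: three points on a planar geological structure determine the plane, and hence its dip. A sketch of that calculation, with illustrative coordinates rather than the Unity tool's terrain-model data:

```python
# Sketch of the geological 3-point problem: recover the dip angle of a
# planar structure from three (x, y, z) points lying on it.
import math

def dip_angle_deg(p1, p2, p3):
    # Two in-plane vectors and their cross product give the plane normal.
    u = [b - a for a, b in zip(p1, p2)]
    v = [b - a for a, b in zip(p1, p3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    horiz = math.hypot(n[0], n[1])
    # Dip = angle between the plane and the horizontal.
    return math.degrees(math.atan2(horiz, abs(n[2])))

# A plane dropping 1 m of elevation per 1 m northward dips at 45 degrees.
dip = dip_angle_deg((0, 0, 0), (1, 0, 0), (0, 1, -1))
```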
Program: Research Assistant
Supervisor: Andrew Maxwell
Abstract: Creating the Global Classroom: Enhancing the Commercialization of University Research
This project designs a Master's course in which students come up with their own projects to adopt new technologies and implement them in research at the university. It is developed to offer a formal approach and novel frameworks that allow students to understand the challenges of improving the value of their technologies, overcoming barriers to adoption, and recognizing alternative strategies for market success.
Supervisor: Maleknaz Nayebi
Abstract: Expertise Aware Content Customization
Readers’ expertise directly impacts their understanding of content, yet everyone receives the same manual for products. In software systems, APIs are the interfaces that offer services to other systems and can connect software systems together. With the adoption of machine learning in recent years across different industries, many machine learning libraries have been written and are used by practitioners from different sectors to bring intelligence into their domain of practice. Yet these libraries' documents are written by software professionals and are highly technical. As a result, there is an abundance of tutorials, courses, and educational resources, and Q/A platforms are full of redundant mistakes reported by users. In this study, we focused on the popular ML library TensorFlow and aimed to answer the question "What model and tool can translate API documentation based on user expertise in machine learning and software engineering?". We first developed a method called "ELSU" to identify the expertise of the user and then curate the content to that expertise along multiple dimensions of readability and content. ELSU uses the Docstring (a descriptive string literal serving a similar purpose to a descriptive comment) to obtain TensorFlow API descriptions. Then, a machine learning process is trained on the existing tutorials to predict the appropriate level of documentation, customized and combined with external resources. Eventually, the mechanism can be applied to various facets of software, not exclusively TensorFlow. Furthermore, the developed project can expand to other contexts to help develop resilient infrastructure and educational access (UN Sustainability Goals 9.5 and 4.6, respectively). The project is available at http://tf-doc.suhasiddiqui.net/.
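The docstring-harvesting step the abstract describes is directly supported by Python itself: every documented API object carries its description as a string literal that can be read programmatically. A minimal sketch using a stdlib function as the example (ELSU's actual pipeline and models are not reproduced here):

```python
# Minimal sketch of docstring extraction, the first step in curating
# API descriptions like TensorFlow's for downstream customization.
import inspect
import json

def api_description(obj):
    """Return the first paragraph of an object's docstring, if any."""
    doc = inspect.getdoc(obj)   # dedents and cleans the raw docstring
    return doc.split("\n\n")[0] if doc else ""

# Works on any documented callable, e.g. a stdlib function:
desc = api_description(json.dumps)
```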
Supervisor: Alvine Boaye Belle
Abstract: Assessing the Systematicity of Software Engineering Reviews Self-Identifying as Systematic
Through systematic reviews, researchers can synthesize the state of knowledge on a research question and understand the associations between exposures and outcomes. Normally, explicit, reproducible, and systematic methods are used to decrease the potential bias that may occur while conducting a review. Moreover, if a systematic review is conducted properly, it yields reliable outcomes from which conclusions can be drawn and decisions informed. Systematic reviews have therefore become popular and crucial in evidence-based health care and in other areas including psychology, education, and sociology. However, many systematic reviews display low methodological and reporting quality. It is thus important to follow reporting guidelines to ensure that a review is transparent, complete, trustworthy, reproducible, and unbiased. Nevertheless, systematic reviews often do not abide by the existing guidelines, which can lower their quality and have negative effects, including a lack of methodological rigor, low-credibility findings, and the potential to mislead decision-makers. Our research explores a quantitative measure in the software engineering area that lets users evaluate the degree to which a review is systematic, and that fosters objective, consistent comparison among reviews. We evaluated 151 reviews published in the software engineering field, retrieved from Google Scholar, IEEE, and ACM, and found relatively suboptimal systematicity, i.e., methodological rigor and reporting quality.
UN SDG 6: Clean Water & Sanitation
Program: Mitacs Globalink
Supervisor: Rashid Bashir
Profile: Zehra is a third-year undergraduate student of Civil Engineering at Aligarh Muslim University, Aligarh. As a research intern under the Mitacs Globalink Research Internship (GRI) program, Zehra is excited to explore groundwater hydrology in the context of climate change.
Abstract: Modeling the Effect of Climate Change on Groundwater Recharge
Groundwater is an important ecological resource. It is relied upon by many as a source of fresh water for drinking, irrigation and industrial purposes. About 30% of Canadians rely on groundwater for domestic use. Groundwater recharge is the flow of water through the vadose zone into the underlying aquifer. Recharge rates vary from location to location as recharge depends on the soil type as well as the frequency, duration, and intensity of precipitation. The climate is constantly changing and therefore it is crucial to estimate how groundwater recharge rates will be altered in response to climate change.
This research effort aims to identify the climatic factors that affect groundwater recharge rates and to quantify groundwater recharge rates for the future. To accomplish this, historical and future climate data for various locations across Ontario were compiled over a total of 120 years, using the available climate data records as well as four different General Circulation Models (GCMs). Groundwater recharge was simulated using a variably saturated flow model with a soil-atmosphere boundary condition. The simulations were carried out using HYDRUS-1D. The output data were then processed to obtain the water balance at the ground surface and the amount of recharge. The results of this research suggest that the impact of climate change on groundwater recharge will be location-specific and a function of the hydraulic properties of the vadose zone.
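The surface water balance underlying the recharge estimate can be stated in one line: recharge is what remains of precipitation after evapotranspiration, runoff, and the change in vadose-zone storage. A simplified sketch with assumed fluxes (HYDRUS-1D resolves these from the full variably saturated flow solution):

```python
# Simplified surface water balance of the kind extracted from model output.
# All fluxes in mm/year; the numbers are illustrative, not study results.

def recharge(precip, evapotranspiration, runoff, delta_storage):
    return precip - evapotranspiration - runoff - delta_storage

# e.g. an assumed year at a sandy site in Ontario:
r = recharge(precip=950.0, evapotranspiration=560.0,
             runoff=120.0, delta_storage=15.0)
```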
Program: Mitacs Globalink
Supervisor: Cuiying Jian
Profile: Ghuncha is a Mechanical Engineering undergraduate student whose research interests involve Machine Learning and Materials Science. She aims to use her learning to develop novel solutions to crucial problems that can have a global impact.
Abstract: Machine Learning Guided Filter Design to Alleviate Heavy Metal Pollution
According to a recent UN study, 80% of wastewater flows back into the ecosystem without being treated. Although acceptable in low quantities, chemical contaminants such as heavy metals present in wastewater may find their way into the food chain and cause chronic or carcinogenic diseases. Thus, it is crucial to treat wastewater before releasing it into the environment. To address this, we propose the use of graphene-based filters. As opposed to conventional membrane filters, which suffer flux decline over time and have a low tolerance to acid/alkaline conditions, graphene-based filters have demonstrated tremendous robustness and high fluxes in wastewater remediation. The aim is to accelerate the material design of graphene-based filters through a combined approach of machine learning (ML) and classical molecular dynamics (MD). A state-of-the-art ML algorithm, GDyNet (based on graph neural networks), is deployed to automate data mining and detect patterns that may otherwise go unnoticed by human eyes in the vast data produced by MD simulations. This is achieved by converting the trajectory data of metal ions in water to a graph with nodes and edges and updating it as the system evolves. This graph is then fed to the GDyNet model to learn motion and predict the probability of different trajectory patterns of metal ions in the MD simulations. We can thus identify the evolution of ions across filter pores during the filtration process. This sets a guideline for the optimal design of graphene-based filters by modifying material structures, focusing primarily on functional groups attached to the filter pores. Upon application, we anticipate filters delivering high performance with low energy inputs and considerably reduced maintenance costs.
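The trajectory-to-graph conversion described above can be sketched simply: each ion becomes a node, and an edge connects two ions whose distance falls below a cutoff at a given MD frame. The cutoff and coordinates are illustrative; GDyNet's real featurization is considerably richer.

```python
# Hedged sketch: build an edge list for one MD frame from ion positions.
import math

def frame_to_graph(positions, cutoff):
    """positions: list of (x, y, z) per ion, in angstroms. Returns edges."""
    edges = []
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            if math.dist(positions[i], positions[j]) < cutoff:
                edges.append((i, j))
    return edges

# Three ions; only the first two are within the (assumed) 3 A cutoff.
frame = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (9.0, 9.0, 9.0)]
edges = frame_to_graph(frame, cutoff=3.0)
```

Re-running this per frame and tracking how the edge list changes is what "updating the graph as the system evolves" amounts to.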
John Li Chen Hok
Supervisor: Stephanie Gora
Abstract: Lead in Decentralised Drinking Water Systems
Lead is a highly toxic heavy metal that can cause adverse neurological effects in children and renal dysfunction in adults. Decentralised drinking water systems (DDWS) include any systems that are not supplied by a centralised municipal water distribution network. Lead has a maximum acceptable concentration (MAC) of 5 ppb (µg/L) as defined by Health Canada, but should be kept as low as reasonably achievable (ALARA) as there are no safe exposure levels. Many sampling methods that consider variables such as overnight stagnation or water usage patterns have been developed to assess lead exposure levels, but there is no specific recommended method in Canada for DDWS. Unlike in municipal systems, in DDWS lead usually originates from premise plumbing such as fittings, solder, and faucets, and its release is influenced by many factors, such as the water chemistry. Our research team partnered with the Nova Scotia Department of Natural Resources and Renewables to conduct a study on two potentially problematic provincial park campgrounds in Nova Scotia, using water quality data from previous years and new data collected in the field by the research team. The objective was to identify the source of lead and find potential solutions. Water samples were collected using methods such as the six-hour stagnation (6HS). The results of the water analysis showed that the men’s bathroom at park 1 was the most problematic, with high levels of copper, zinc, and lead, suggesting corrosion of brass plumbing components. Additionally, the sampling methods were compared to each other to find out which ones were redundant. Making sure that water in DDWS is safe for consumption is essential to protect public health and aligns with the UN's Clean Water and Sanitation goal.
Program: Research Assistant
Supervisor: Marina Freire-Gormaly
Abstract: Rapid Groundwater Potential Mapping of the Ban Saka Area of Laos Using GIS and Remote Sensing (RS) Tools and Analytic Hierarchy Process (AHP) Techniques
Over 40% of the global population does not have access to sufficient clean water. By 2025, 1.8 billion people will be living in countries or regions with absolute water scarcity, according to UN-Water. Laos is a nation with plentiful surface water and broad rivers, but there is little infrastructure to make that water clean and accessible outside of cities. Daily use and emergency responses in humanitarian contexts require a rapid setup of water supply, yet boreholes are often drilled where the needs are highest rather than where hydrogeological conditions are most favourable. The Rapid Groundwater Potential Mapping (RGWPM) methodology was therefore developed as a practical tool to support borehole siting when time is critical, allowing strategic planning of geophysical campaigns. RGWPM is based on the combined analysis of satellite images, digital elevation models, rainfall, and geological maps, obtained through spatial overlay of hydrogeological variables controlling groundwater potential: drainage density, rainfall, slope, soil type, geology, lineament density, and land use and land cover (LULC). Drainage density controls the runoff distribution and infiltration rates, rainfall is the major source of water, LULC affects the recharge processes, slope drives the water flow energy, soil type governs the infiltration rates, geology controls the infiltration, movement, and storage of water, and lineament density determines the hydraulic conductivity. In this work, RGWPM maps of the Ban Saka area in the Phonghong district of Laos were produced using Multiple Criteria Decision-Making methods: Analytic Hierarchy Process (AHP) techniques, applied with GIS and Remote Sensing tools, delineated groundwater potential zones through a weighted overlay of these seven factors, with the overall groundwater potential (GWP) characterized as 'very low', 'low', 'moderate', or 'high' and each zone associated with a specific water supply option.
As groundwater is the most reliable source of drinking water in Laos, RGWPM is used to help provide a safe source of potable water.
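The AHP step reduces a pairwise comparison matrix over the factors to a set of weights for the overlay. A sketch using the common geometric-mean approximation of the principal eigenvector; the 3x3 matrix here is a toy example on Saaty's 1-9 scale, not the study's actual judgments over all seven factors.

```python
# Sketch of AHP weight derivation via row geometric means.

def ahp_weights(matrix):
    n = len(matrix)
    gmeans = []
    for row in matrix:
        g = 1.0
        for x in row:
            g *= x
        gmeans.append(g ** (1.0 / n))   # geometric mean of the row
    total = sum(gmeans)
    return [g / total for g in gmeans]  # normalized to sum to 1

# Toy pairwise judgments: Rainfall vs Slope vs Geology (assumed values).
m = [[1.0,   3.0, 5.0],
     [1 / 3, 1.0, 3.0],
     [1 / 5, 1 / 3, 1.0]]
w = ahp_weights(m)   # Rainfall receives the largest weight
```

These weights would then multiply the reclassified factor rasters in the weighted overlay that yields the GWP zoning.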
Supervisor: Magdalena Krol
Abstract: Visualizing Subsurface Contaminant Transport with Web-based Course Tools
Supervisor: Stephanie Gora
Abstract: 3D Printed Proportional Water Sampling Device, to Help Detect Chemical Contaminants in Decentralized Drinking Water Systems
Each year there are countless health issues associated with contaminated drinking water. In an effort to combat these issues, research began into a non-invasive 3D printed water sampling device. Many different types of water sampling can be done; however, proportional sampling was chosen for this research because of a distinct advantage: it lets researchers see how much chemical contaminant consumers are exposed to over a set time frame. A 3D printed sampling device makes proportional sampling easier. Because the device was built in a modular way, it can be attached to any drinking water system to easily provide quality water samples. The modular design also allows specific components to be easily replaced if needed, making the device user-friendly, with a wide variety of applications due to the versatility of the overall design.
UN SDG 7: Affordable & Clean Energy
Akshay Karthiyayini Kelappan, Niloy Sen and Sheel Bhadra
Program: Mitacs Globalink
Supervisor: Paul G. O'Brien
Abstract: Design and Demonstration of a Photovoltaic-thermal Solar Water Heating System
High temperatures have a detrimental effect on the performance of photovoltaic (PV) cells because of their negative temperature coefficients. The performance of PV cells can be enhanced by keeping them cool while they are exposed to sunlight. This study proposes a novel PV-water-Trombe wall, wherein water is used as a thermal energy storage medium. Hot water and air at the top of the Trombe wall can be used for indoor air and water heating in buildings. Further, PV cells are placed at the bottom half of the Trombe wall, where the relatively cold water at the bottom of the thermal energy storage medium keeps them cool. Moreover, tinted acrylic sheets can be placed in the water wall to increase the amount of solar thermal energy generated and stored in the water. Using tinted sheets also allows the Trombe wall to be semi-transparent, which can be desirable for improved building aesthetics. In this work, a PV-water-Trombe wall prototype (with and without a tinted acrylic sheet) was tested under solar-simulated light for its ability to generate electricity and heat simultaneously. The temperature profile of the experimental setup was tracked using thermocouples placed at various locations, and the output power from the PV cells was measured. CFD simulations will also be performed, and the numerical results will be compared with the experimental results. The results demonstrate that the Trombe wall prototype can provide air at up to 70 °C and water above 40 °C while keeping the temperature of the PV cells lower. The experimental results attained thus far suggest that PV-water-Trombe walls can improve building energy performance by generating electric power and by reducing building heating loads.
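The "negative temperature coefficient" can be put in numbers with the standard linear power-temperature model. The coefficient below (-0.4%/°C) is a representative assumed value for crystalline silicon, not a measured property of the prototype's cells:

```python
# Linear PV power-temperature model: P = P_STC * (1 + gamma * (T - 25)).
# gamma is the power temperature coefficient (assumed -0.4%/degC here).

def pv_power(p_stc_w, cell_temp_c, gamma_per_c=-0.004):
    """Output power at a given cell temperature, relative to STC (25 degC)."""
    return p_stc_w * (1.0 + gamma_per_c * (cell_temp_c - 25.0))

hot    = pv_power(300.0, 65.0)   # e.g. an uncooled cell at 65 degC
cooled = pv_power(300.0, 35.0)   # cell kept cool by the water wall
```

The gap between the two values is the electrical benefit the water-wall cooling targets.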
Program: Mitacs Globalink
Supervisor: Ahmed Elkholy and Roger Kempers
Profile: Jack is a final year aerospace engineering student at Durham University (UK) currently undertaking the Mitacs Globalink Programme. Jack is passionate about working on projects related to heat transfer.
Abstract: Spherical Re-entrant Cavities for Pool Boiling Enhancement using SLM
Pool boiling cooling systems have been widely applied in several industries, such as immersion cooling in electronics and aerospace, owing to their high heat transfer coefficient (HTC) and increased critical heat flux (CHF). The current work proposes using the laser powder bed fusion (LPBF) technique to fabricate spherical re-entrant cavities that are not amenable to fabrication using conventional machining. Re-entrant cavities are spherical cavities embedded within the sample with an opening (pore) to the boiling surface. This opening has a diameter smaller than the cavity diameter, which helps trap the vapor and promotes nucleation activity. The CHF and HTC were tested in a world-leading pool boiling apparatus using Novec 7100/ethanol at saturated conditions. It is expected that the porous nature of the LPBF parts facilitates bubble generation through the pores and wicks the replenishing liquid to the hot spots through the internal arteries, which would increase the CHF and HTC simultaneously. The bubble dynamics will be investigated using a high-speed camera to understand the underlying heat transfer enhancement mechanism.
Mahmoud Al Akchar
Supervisor: Roger Kempers
Profile: Mahmoud is interested in thermal applications and cooling. His capstone project was on exhaust heat recycling of trucks to improve fuel efficiency under the supervision of Dr. Kempers. He decided to continue his research with Dr. Kempers as he plans to pursue a Master’s degree in the field.
Abstract: Single-Phase Grip Metal Cold Plate Technology for IGBT Modules Cooling Applications
This work investigates a Grip Metal cold plate, a new hook-based cold plate technology, against other more common cold plate technologies for IGBT cooling. Insulated Gate Bipolar Transistor (IGBT) modules are used in electric power converters and inverters. One major mode of failure is thermal stress: high temperatures cause packaging and cable failure, and active and passive heating and cooling of the IGBT junctions cause temperature and thermal cycling. This long-term degradation causes sudden breakdowns and downtime requiring unscheduled maintenance. Therefore, advances in IGBT cooling are necessary to save cost and time and to push the hardware to its maximum efficiency. A skiving process introduced by NUCAP manufactures an array of hook-shaped features to increase surface area. This new technology is expected to improve the cooling performance of cold plates by facilitating fluid mixing and promoting boundary layer separation. A single-phase testing apparatus was built to evaluate the cold plates under variable flow rates. The thermal resistance of the different cold plates will be calculated and compared. It is expected that the Grip Metal cold plate will have a lower thermal resistance than the others and therefore better thermal cooling performance.
Supervisor: John Lam
Profile: Maria is currently working towards her degree in electrical engineering and is developing a GaN-based bidirectional power interface. Her research interests include power optimization, renewable energy systems, and environmental bio-energy harvesting.
Abstract: Gallium-Nitride Based Multi-MHz Bidirectional Power Interface for Integrated Energy Storage in a DC Microgrid
The Canadian data center and wireless communication market is currently worth more than $5 billion and is forecasted to grow over 10% every year. Using renewable energy sources to power data centers and servers can provide a clean solution to manage the overall electricity consumption. Since most of the electrical components in a data center, such as batteries for providing back-up power, require DC power, high-voltage DC distribution is an emerging power architecture. Existing power interfaces that convert the surplus energy captured from the renewable source to the storage element (e.g., a battery) consist of two conversion stages: one stage interfaces with the renewable source and the second stage interfaces with the storage element. This approach results in low efficiency and high system cost. This project develops a single-stage bidirectional AC/DC converter that stores extra extracted power or delivers required power directly, unlike the conventional design approach. The proposed power interface operates in two different modes: step-down (buck mode) to store energy and step-up (boost mode) to deliver energy from the battery. To reduce the size of the proposed circuit, Gallium Nitride switching devices with very small footprints are used. To minimize the size of the magnetic components, the circuit operates at a very high switching frequency in the MHz range. Simulation and theoretical analysis of the proposed circuit with a rated power of 300 W are performed in Powersim and MATLAB to study the converter's operating principles and power losses. It is anticipated that the power conversion efficiency will increase by at least 5-10% compared to the existing design.
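For intuition, the two operating modes named above obey the textbook ideal conversion ratios, assuming a lossless converter in continuous conduction; the numeric values below are illustrative, not the project's design points:

```python
# Idealized steady-state conversion ratios for the two modes named in
# the abstract. Assumes a lossless converter in continuous conduction;
# all voltages and duty cycles here are illustrative.

def buck_vout(vin, duty):
    """Step-down (buck) mode, storing energy: Vout = D * Vin."""
    return duty * vin

def boost_vout(vin, duty):
    """Step-up (boost) mode, delivering energy: Vout = Vin / (1 - D)."""
    return vin / (1.0 - duty)

v_store = buck_vout(vin=380.0, duty=0.5)     # charging: 380 V bus -> 190 V
v_deliver = boost_vout(vin=48.0, duty=0.75)  # discharging: 48 V -> 192 V
```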
Supervisor: Roger Kempers
Profile: Walid is pursuing a mechanical engineering degree at Lassonde. His research revolves around the topology optimization of heat sinks, which he will be applying to the heat transfer fluid flow field.
Abstract: Topology Optimization of Liquid Cooled Heatsinks
Topology optimization has been gaining popularity for creating non-intuitive heatsink designs for thermal management; such designs are difficult to create with traditional optimization methods. Three-dimensional topology optimization is computationally expensive; therefore, various techniques have been used to simplify it into a two-dimensional problem. In this work, topology optimization of a 2D water-cooled heat sink is carried out using the density-based approach. The energy and flow fields are solved using COMSOL Multiphysics. The optimization objective is to maximize the mean temperature of the fluid leaving the heat sink at a given heat flux and pressure drop. The Globally Convergent Method of Moving Asymptotes (GCMMA), built into COMSOL, was employed to reach the optimization solution, with COMSOL's built-in adjoint solver and sensitivity analysis used to guide the optimization. It was found that the prescribed tuning parameters significantly influence the obtained design, which requires further investigation. Several parameters, such as the heat flux, pressure drop, and design domain size, will also be investigated.
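As a minimal illustration of the density-based approach mentioned above, a RAMP-style Brinkman penalization is one common choice: each cell of the design domain carries a density between 0 and 1, and an interpolation function maps that density to a flow resistance so the optimizer can move continuously between solid and fluid. The parameter values are illustrative and the study's actual COMSOL formulation is not reproduced here:

```python
# Minimal sketch of a density-based interpolation for fluid topology
# optimization (RAMP-style Brinkman penalization). Parameter values are
# illustrative; the study's actual COMSOL setup is not reproduced here.

def brinkman_alpha(rho, alpha_max=1e4, q=0.1):
    """Inverse permeability: rho = 1 is open fluid (alpha = 0),
    rho = 0 is solid (alpha = alpha_max). The parameter q controls
    how sharply intermediate densities are penalized."""
    return alpha_max * q * (1.0 - rho) / (q + rho)

# Endpoints behave as intended: fluid cells carry no flow resistance,
# solid cells carry the maximum penalization.
alpha_fluid = brinkman_alpha(1.0)  # 0.0
alpha_solid = brinkman_alpha(0.0)  # 1e4
```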
Supervisor: Kamelia Atefi Monfared
Profile: Jaiden is a 2nd-year Civil Engineering student. As a RAY researcher, he is exploring his interest in geothermal energy by working with Prof. Atefi Monfared.
Abstract: Geothermal Energy Piles
Geothermal energy piles are dual-purpose structural elements: in addition to their load-bearing role as foundations, they can provide clean heating or cooling using sustainable geothermal energy. During construction of energy pile foundations, pipes, normally made of high-density polyethylene, are added to the interior of the foundation. These pipes carry a working liquid that acts as a heat exchanger, heating up or cooling down the pile and thereby cooling down or warming up the building, respectively. Through this system, heating or cooling can be provided from green renewable energy. There are still knowledge gaps regarding the optimum thermo-mechanical performance of energy piles for different environments and ground conditions. This project evaluates the use of energy piles in the Canadian environment. We develop a numerical model in COMSOL Multiphysics to replicate concrete cylinders of various dimensions, soil types, and configurations of the tubing system inside the pile. One key challenge identified is the efficiency of this system. The findings will help determine the optimum energy pile configuration for the Canadian environment.
Supervisor: Roger Kempers
Profile: Rehan is a 4th-year Mechanical Engineering student. His passion for research stemmed from his curiosity about the unknown, and this has pushed him to explore different fields of engineering.
Abstract: Experimental Study and Performance Characterization of High-Performing Microchannel Heat Exchangers for Hypersonic Flow
Microchannel technology can be incorporated into heat exchanger designs for improved thermal performance and to accommodate mass and volume reduction for space hardware applications. This study details the design, fabrication, and commissioning of an experimental test loop used to characterize the thermal and hydraulic performance of microchannel heat exchangers of the kind used in electric vehicle (EV) battery management, electronics cooling, and waste heat recovery. The test loop was designed with a regulator, a 10-micron particle air filter, a flow rotameter, and a variable alternating current (AC) transformer coupled with a heater, as well as the microchannel plate heat exchanger itself. Pressure drops were measured with differential pressure transducers on the cold side and hot side of the heat exchanger. Temperatures were logged at the heat exchanger ports and used to compute the logarithmic mean temperature difference (LMTD). These values, alongside the input power, were used to characterize the performance of the microchannel heat exchanger. The primary fluid used in this investigation was nitrogen (N2); however, initial test runs were conducted with compressed air. The microchannel heat exchanger used in our experiment is made from stainless steel and manufactured through diffusion bonding and chemical etching processes. This research provides a better method to characterize microchannel heat exchangers and allows us to determine the heat exchanger efficiency, heat exchanger effectiveness, fin efficiency, as well as the overall heat transfer rate.
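For reference, the LMTD reduction described above can be sketched in a few lines. A counterflow arrangement is assumed and the temperatures and heat duty are illustrative, not measured data:

```python
import math

# Logarithmic mean temperature difference from the four port
# temperatures, plus the overall conductance UA from the heat duty.
# Counterflow arrangement assumed; all numbers are illustrative.

def lmtd(t_hot_in, t_hot_out, t_cold_in, t_cold_out):
    dt1 = t_hot_in - t_cold_out   # temperature difference at one end
    dt2 = t_hot_out - t_cold_in   # temperature difference at the other end
    if math.isclose(dt1, dt2):
        return dt1                # limiting case: equal end differences
    return (dt1 - dt2) / math.log(dt1 / dt2)

def overall_ua(q_watts, lm):
    """Overall conductance UA = Q / LMTD (W/K)."""
    return q_watts / lm

lm = lmtd(80.0, 50.0, 20.0, 40.0)  # (40 - 30) / ln(40/30), about 34.8 K
ua = overall_ua(1500.0, lm)
```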
Program: Mitacs Globalink
Supervisor: Satinder Kaur Brar
Profile: Bridget is a visiting researcher working in Dr. Brar's lab through the Fulbright Canada Mitacs Globalink Program. She is a fourth-year student at Oregon State University double majoring in chemical engineering and bioresource research. Her research interests are in microbiology and engineering, particularly in biofuel production.
Abstract: Fed-batch Cultivation as a Bioprocess Tool to Increase Lipid Accumulation and Carotenoid Production by Rhodosporidium toruloides
Microbial lipids are a promising alternative oil feedstock for the production of biofuels, food additives, pharmaceuticals, and chemicals. In batch cultures, the oleaginous yeast Rhodosporidium toruloides is capable of accumulating up to 70% of its dry cell weight as lipids. In addition to microbial lipid accumulation, R. toruloides has the capacity to produce valuable compounds like carotenoids. R. toruloides was cultivated under fed-batch conditions as a tool to increase lipid accumulation and lipid productivity. Fed-batch cultures were run for 168 h in a bench-scale bioreactor using synthetic media. By maintaining the glucose concentration at 10 g/L every 24 h, a total biomass of 14.15 g/L and a lipid content of 42.4% (w/w) were obtained. The highest biomass productivity was observed at 32 h, while the highest lipid productivity was at 56 h. Finally, the fatty acids present were stearic, palmitic, and oleic acids. This study shows that the fed-batch fermentation process is an excellent tool to improve lipid accumulation and carotenoid production using R. toruloides.
Program: Mitacs Globalink
Supervisor: Cuiying Jian
Profile: Himanshu is a third-year undergraduate student in the Department of Mechanical Engineering at the Indian Institute of Technology Kanpur. Himanshu is working with supervisor Prof. Cuiying Jian on the application of the Gryffin algorithm and aims to become an expert in the field of machine learning.
Abstract: Application of Gryffin in the Design of Solar Cells
Data-driven design strategies for autonomous experimentation have mostly concentrated on continuous process parameters, despite the urgent need for effective techniques for selecting categorical variables. For instance, to design hybrid organic-inorganic perovskites for light harvesting, exploring categorical variables, i.e., organic molecules, halide anions, and cations, is of great importance to determine the optimal compositions for the required bandgap. This research employs the Gryffin algorithm to search categorical variables and further pinpoint material candidates that can optimize the requested bandgap. The Gryffin algorithm augments Bayesian optimization by using kernel density estimation directly on the categorical variables. In addition, it can rank the importance of the properties of each categorical variable by converting properties into descriptors. The Gryffin algorithm was first used to explore hybrid organic-inorganic perovskites to search for material candidates with the lowest bandgap. The descriptor generation process was modified to take various properties into consideration, including electron affinity, mass, and electronegativity for inorganic cations and anions, and energy levels, dipole moment, and molecular weight for organic compounds. It was found that molecular weight has the most relevance for organic molecules, whereas electronegativity has the most relevance for cations and anions. Following this, the Gryffin algorithm was also employed to explore inorganic solids of diverse elements and proportions to aid the design of inorganic solar cells. Properties of each element, such as atomic number, mass, and group number, will be ranked in relation to bandgaps. It is expected that key information on property importance can provide crucial knowledge and insights to guide the development of photovoltaic devices.
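The kernel-density idea at the heart of Gryffin can be caricatured in a few lines. This is a toy sketch, not the Gryffin library's API, and the electronegativity descriptor values are illustrative placeholders: options whose descriptors lie near those of previously successful choices receive higher scores.

```python
import numpy as np

# Toy sketch of scoring categorical options with a kernel density built
# on descriptor values of options that performed well so far. Options
# with descriptors near past good choices score higher.

options = {"Cl": 3.16, "Br": 2.96, "I": 2.66}  # anion -> electronegativity

def kde_score(x, observed, bandwidth=0.2):
    """Gaussian kernel density over previously 'good' descriptor values."""
    if not observed:
        return 0.0
    d = (x - np.asarray(observed)) / bandwidth
    return float(np.exp(-0.5 * d**2).sum()) / (len(observed) * bandwidth)

# Suppose iodide gave the lowest bandgap so far: its descriptor seeds
# the "good" density, steering the search toward similar chemistries.
good = [options["I"]]
scores = {name: kde_score(v, good) for name, v in options.items()}
best = max(scores, key=scores.get)  # "I"; "Br" outranks the distant "Cl"
```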
Program: Mitacs Globalink
Supervisor: Amir Asif
Abstract: Cost Optimization Model of Electric Vehicles Parking Lots for Distribution System Operator
During the past few years, greenhouse gas emissions have increased rapidly. According to the International Energy Agency, 32.3 gigatonnes of CO2 were emitted in 2016, with the transportation sector among the major contributors. Hence, the adoption of Electric Vehicles (EVs) is considered a prime action toward a sustainable energy environment. The growth and accessibility of charging infrastructure to meet the charging demands of EV owners are vital for the consistent adoption of EVs. A considerable need for EV charging comes from EVs parked in commercial spaces and workplaces. Additionally, energy demand is likely to increase by 56% in the 30 years from 2010 due to economic development, according to a US Energy Information Administration (EIA) assessment. However, the grid was originally designed to bear lower loads; thus, upon the introduction of this extra load, the grid requires additional infrastructure and research to protect it from potential threats. The Vehicle-to-Grid solution is currently under research to solve this problem.
The proposed model offers a strategy for optimizing the distribution system operator's profit while considering the EV parking lot (EVPL) owner's yields and EV user satisfaction. Additionally, the model decides on the optimal combination of uni-directional (UD) and bi-directional (BD) chargers, considering all the financial limitations of vehicle-to-grid (V2G) services and the incentive-participation modeling of EV users.
Supervisor: Roger Kempers
Profile: Rayyan is interested in finding and studying ways to improve current technologies in Mechanical Engineering and enjoys experimentation.
Abstract: Development of 3D-printed wick structures
Two-phase cooling devices (i.e., vapour chambers (VCs) and heat pipes (HPs)) have recently gained attention in the microelectronics industry due to their ability to meet increasingly high heat flux requirements compared to traditional single-phase heatsinks. The wick is an essential component of HPs and VCs; it drives the working liquid from the condenser side (cold side) back to the evaporator (hot side) through its capillaries. Therefore, the wick should be engineered to have higher permeability (K) and a smaller effective pore radius (reff). As a result, various techniques and manufacturing methods, such as etching, sintering, and machining, have been developed to balance the trade-off between both parameters. The current work proposes using laser powder bed fusion (LPBF) technology to tailor complex porous wicking structures. Several strut-based unit cell structures (BCC, FCC, FL, etc.) were developed and investigated at different porosity levels. The hydraulic performance parameters of the wicks were tested using the rate-of-rise (m-t) method. It was found that the additively manufactured wicks showed a higher K/reff compared with conventional sintered wicks.
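The rate-of-rise (m-t) reduction can be sketched as follows: in the Lucas-Washburn regime, the mass of liquid absorbed by the wick grows with the square root of time, so the slope of m² versus t ranks the wicks' capillary performance (proportional to K/reff). The data points below are synthetic:

```python
import numpy as np

# Synthetic rate-of-rise data: mass uptake m(t) ~ sqrt(t) in the
# Lucas-Washburn regime, so m^2 vs t is linear and its slope ranks
# the wick's capillary performance (~ K / r_eff).

t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])        # time, s
m = np.array([0.0, 0.10, 0.141, 0.173, 0.20])  # absorbed mass, g

slope = np.polyfit(t, m**2, 1)[0]  # g^2/s; a higher slope means a better wick
```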
UN SDG 9: Innovation in 3D Printing & Smart Materials
Steeve-Johan Otoka-Eyota, Yuyu Ren
Supervisor: Solomon Boakye-Yiadom
Abstract: Defect Detection During Metal 3D printing Using Supervised Machine Learning
Metal-based additive manufacturing (Metal AM), also known as metal 3D printing, is the process of making 3D objects by building up multiple layers of fine metal powder. There are different types of Metal AM, but our research focuses on the Laser Powder Bed Fusion (LPBF) technique. The main advantage of LPBF Metal AM is its ability to create flexible, lightweight structures with complex geometric shapes in a shorter time. However, the produced parts contain defects and discontinuities caused by many factors. The lack of performance data, improvement standards, and consensus on fabricated part properties has impeded the industrial application of Metal AM. Moreover, the absence of the fundamental data required for developing sophisticated artificial intelligence and machine learning algorithms has prevented realizing this goal. The first objective of this research is to monitor and track the melt pool morphological evolution, the behavior of ejection spatters, and the un-melted and over-melted regions during the rapid processing of parts. This involves collecting multi-sensor data and selecting the best pre-processing techniques for accurate dimensionality reduction. The main goal is to develop advanced machine learning algorithms that can predict defect generation during Metal AM. To gather data, we recorded an experiment using a high-speed camera and developed two applications, in Java and Python, generating two sets of data. Using a multitude of image processing techniques, we generated data by studying the melt pool and the ejection spatters. The generated data are automatically stored in a file, ready for cleaning and for developing new algorithms to detect defects in metal 3D printing and ultimately improve printing quality.
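One elementary building block of such an image-processing pipeline — segmenting the bright melt pool in a frame by intensity thresholding and logging its area — can be sketched as follows (the frame is synthetic; the project's applications use more sophisticated techniques):

```python
import numpy as np

# Segment the bright melt pool from a (synthetic) high-speed-camera
# frame by intensity thresholding and report its area in pixels.

def melt_pool_area(frame, thresh):
    """Count pixels above the intensity threshold (the melt-pool mask)."""
    return int((frame > thresh).sum())

frame = np.zeros((64, 64))
frame[20:30, 20:35] = 255.0               # synthetic 10 x 15 px melt pool
area = melt_pool_area(frame, thresh=128)  # 150 pixels
```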
Program: Mitacs Globalink
Supervisor: Reza Rizvi
Profile: Yassine is a Mitacs Globalink student from Tunisia, working at Lassonde under the supervision of Professor Reza Rizvi. Yassine's specialty is computer systems engineering.
Abstract: Performing Hardness Test Using Fully Automated Three-Dimensional Printer
Several methods and tests have been carried out to investigate the mechanical behavior of polymers. These tests require gathering a huge amount of data with good precision, and accuracy, time, and money constraints differ from one test to another. These constraints can easily limit the progress of a project in any field. The hardness test, for example, is considered the most suitable test in terms of the constraints mentioned. Hardness is defined as the resistance of a material to penetration, and the majority of commercial hardness meters force a small penetrator (indenter) into the metal by means of an applied load. Experiments will be done on thin, small material samples, and an appropriate indenter needs to be built taking its shape into consideration. Since 3D printing and rapid prototyping have a great impact on modern manufacturing practices, the objective of this project is to have a fully automated three-dimensional printer that can perform printing as well as hardness tests. The microcontroller used in this research work is an Arduino Mega, which is responsible for controlling five stepper motors: x-axis, y-axis, z-axis, indenter, and nozzle. Four of these motors have already been installed in the 3D printer; the fifth motor is a linear actuator that will move the indenter up and down.
Supervisor: Roger Kempers
Profile: Madison is a 5th-year mechanical engineering student and this is her second time participating in the Lassonde Undergraduate Summer Research Conference. She had an amazing experience last year at the conference and wanted to continue her heat transfer-related research again this summer. She chose research because she is enthusiastic about getting hands-on experience in her field, and further developing her engineering knowledge.
Abstract: Two-Phase Loop Thermosyphon with Porous Evaporators Fabricated by Laser Powder Bed Fusion
Single-phase cooling methods have become insufficient for the high heat fluxes generated by increasingly miniaturized electronic components. Two-phase loop thermosyphons provide an attractive solution for transferring heat at relatively small temperature differences between consecutive boiling and condensation stages. The two-phase loop thermosyphon in the present study consists of an additively manufactured (AM) side-heated evaporator and plate heat exchanger condenser connected by transparent nylon tubing. Input thermal power converts the working fluid to vapor in the evaporator. The vapor flows upward through the riser tube to the condenser where it is condensed by a cooling loop. The condensed fluid returns to the evaporator under gravity in the downcomer tube. There have been extensive previous studies to enhance thermosyphon performance by modifying the evaporator surface topology through techniques such as powder sintering, coating, milling, etc., but not AM methods such as Laser Powder Bed Fusion (LPBF). The LPBF process produces small-scale surface deformations that work as stable nucleation locations for boiling and assist liquid replenishment. In the present study, three evaporators with different porosities were manufactured by LPBF and were investigated for their effect on thermal resistance at varying fill ratios and input powers. Computer simulations are used to analyze the conductive and convective heat transfer mechanisms within the evaporator. It is anticipated that this modification will enhance the loop performance and mitigate temperature instabilities. Electronics, solar, air conditioning, and avionics systems and applications will benefit from these enhanced cooling technologies, which will improve their overall performance and safety.
Supervisor: Cuiying Jian
Abstract: Fracture Point Detection and Prediction in Graphene through Machine Learning Algorithms
Among all 2D materials, graphene has received enormous research attention in the last decade. It is the strongest material known to mankind. Because the material is relatively new, it is vital that we research its material properties further and harness its full potential, which ranges from wastewater treatment to materials for space exploration. The purpose of this research is to dive deep into the characteristic behavior of graphene with a faster and more cost-efficient approach using machine learning (ML) algorithms. The primary objective is to detect fracture points in sheets of graphene with defect ratios of 1% to 10%, stretched along different trajectories. The trajectories consist of a large dataset signifying the direction of the stretch and the time-dependent coordinates of all the atoms undergoing an applied strain rate of 0.001 m/s at 300 K, obtained from molecular dynamics simulations. This detection can potentially be achieved by an anomaly detection algorithm that emphasizes detecting outliers in the given trajectory data at every time step. The goal is to identify the abnormality and pinpoint its earliest occurrence to detect the first fracture point. The secondary objective is to input the fracture point detection data into a neural network and predict the fracture points for new graphene samples with other defect ratios. To fulfil this goal, two different neural networks from the literature were used to validate our conclusions on Young's modulus and fracture stress under the above-mentioned temperature and strain rate. The analyzed code and derived observations provided a deep understanding of neural network architectures and helped lay the groundwork for our own neural network. The future of graphene is promising. This innovative approach of integrating ML to investigate materials can deepen our understanding and accelerate the materials design process.
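A toy version of the anomaly-detection step described above flags the earliest time step at which any atom's frame-to-frame displacement is a statistical outlier. The trajectory below is synthetic, not molecular dynamics output, and the z-score threshold is an arbitrary illustrative choice:

```python
import numpy as np

# Synthetic trajectory: (steps, atoms, xyz). A "fracture" is injected
# by displacing atom 0 abruptly at step 30.
rng = np.random.default_rng(1)
n_steps, n_atoms = 50, 100
traj = rng.normal(0.0, 0.01, size=(n_steps, n_atoms, 3)).cumsum(axis=0)
traj[30:, 0] += 5.0

def first_fracture_step(traj, z_thresh=8.0):
    """Earliest step where some atom's frame-to-frame jump is anomalous."""
    jumps = np.linalg.norm(np.diff(traj, axis=0), axis=2)  # (steps-1, atoms)
    for t, row in enumerate(jumps, start=1):
        z = (row - row.mean()) / (row.std() + 1e-12)
        if (z > z_thresh).any():
            return t
    return None

step = first_fracture_step(traj)  # 30 for this synthetic trajectory
```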
Supervisor: Roger Kempers
Abstract: Characterization of Interface Materials
My research aids a project involving the characterization of thermal interface materials. The device will measure thermal resistance, effective thermal conductivity, and electrical resistance as a function of pressure and material thickness. As electronic devices become more compact and the demand for their sophistication increases, it is more difficult to reduce their operating temperature, which results in a shorter device lifespan. Thermal interface materials (TIMs) eliminate the voids between the contacting interface areas of electronic components to maximize heat transfer and dissipation. Current TIM testers offer limited precision, so there is a need for a more sophisticated testing device that can accurately measure the quality of TIMs.
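The measured quantities relate through a simple series-resistance picture; a sketch with illustrative numbers (not the device's actual calibration):

```python
# Series-resistance picture of a TIM joint: bulk conduction through the
# bond-line thickness (BLT) plus contact resistance at the interfaces.
# All values below are illustrative.

def tim_resistance(blt_m, k_w_mk, area_m2, r_contact_k_w=0.0):
    """Total thermal resistance (K/W) of a TIM layer of thickness blt_m."""
    return blt_m / (k_w_mk * area_m2) + r_contact_k_w

def effective_conductivity(blt_m, r_total_k_w, area_m2):
    """Back out effective conductivity from a measured total resistance."""
    return blt_m / (r_total_k_w * area_m2)

# 100 um layer, k = 3 W/(m K), 1 cm^2 contact area:
r = tim_resistance(100e-6, 3.0, 1e-4)  # bulk term only, about 0.333 K/W
```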
Supervisor: Isaac B. Smith
Profile: Vennesa is a 4th-year Computer Science student who is interested in computational modeling, machine learning, and Mars.
Abstract: Seasonal variability of the Southern Polar Layered Deposits of Mars from surface observations and numerical simulations
The southern polar layered deposits (SPLD) of Mars are a series of troughs, 3-4 km deep, consisting primarily of water ice and dust and formed by the erosion and deposition of ice and dust. Seasonally, during southern spring, fast-moving surface winds known as katabatic winds are accelerated downward into the troughs and erode the surface material in the process. Near the bottom of a trough, the katabatic wind layer rapidly expands, a phenomenon known as a katabatic jump, and optically thick, low-altitude clouds form because of significant changes in the local pressure. These clouds snow the eroded material back onto the surface and trace the movement of surface material throughout the polar region.
For the first part of the project, I analyzed ~5000 visual-band images from the Thermal Emission Imaging System (THEMIS) to statistically determine the seasonal variability of surface clouds over several Martian seasons. THEMIS has a narrow field of view, can easily resolve clouds at the 2-3 km scale, and has coverage from early spring to late summer. My analysis matches previously published observations from Professor Isaac Smith, and I was also able to update our current database of trough clouds by including the most recent Martian season. I found that the katabatic clouds are most common during mid-to-late spring, when the katabatic winds are at their strongest, and that they quickly die out towards the end of spring.
For the second part of the project, I converted a hydraulic jump model from MATLAB to Python. Hydraulic jumps are a terrestrial phenomenon similar to the Martian katabatic jumps that are responsible for trough formation. The model was modified to work with a 3-dimensional rotating reference frame and with Martian conditions, allowing us to investigate near-surface polar aeolian processes which act to influence polar topographical change.
The resultant model will also contribute to the development of a wider computational mesoscale model of the Martian atmosphere, as it can quickly test new physics compared to computationally heavy models.
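For context on the terrestrial analogue, the classical one-dimensional hydraulic-jump (Bélanger) relation connects the upstream Froude number to the depth after the jump. This is a sketch with illustrative values; the actual model solves far more than this:

```python
import math

# Classical 1D hydraulic jump: supercritical inflow (Fr > 1) jumps to a
# deeper, subcritical state. Input values are illustrative.

def froude_number(u, h, g=9.81):
    """Froude number of a shallow flow with speed u and depth h."""
    return u / math.sqrt(g * h)

def conjugate_depth(h1, froude):
    """Belanger equation: depth downstream of a hydraulic jump."""
    return 0.5 * h1 * (math.sqrt(1.0 + 8.0 * froude**2) - 1.0)

fr = froude_number(u=6.0, h=0.5)         # ~2.71, supercritical inflow
h2 = conjugate_depth(h1=0.5, froude=fr)  # ~1.68 m after the jump
```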
Supervisor: Regina Lee
Profile: Mark completed his BCom at York University in 2021 and entered the Physics and Astronomy (Space Science) program the following Fall term. He is researching RSO light curve analysis and characterization at the Nanosatellite Research Laboratory. His interests include RPGs, Sci-Fi novels, and board games.
Abstract: Improving the Satellite Facet Modelling Process for More Accurate Simulated Light Curve Analysis
Identification of Resident Space Objects (RSOs) from the ground is a crucial step to properly forecasting conditions in Earth orbit. One method being explored is identifying an RSO from a plot of its brightness variation over time (referred to as its light curve), which is directly related to its geometry, surface properties, orbit, and attitude. It is currently not feasible to train an Artificial Intelligence to characterize or identify an RSO from the light curve alone, as there is a limited set of labeled training data, so the need has arisen for simulated light curves to fill the gap. A current bottleneck to this process is the time it takes to create each 3D model for simulation (referred to as a facet model) – the process is complicated and limited in scale, taking 2-5 hours of effort depending on the RSO. In this research we have developed a GUI-based application to improve the efficiency of the facet modeling process and expand its capabilities, bringing the modeling time of an RSO to an hour or less so that the labeled dataset can be created. With the added functionality and intuitive interface provided by the application, it is now possible to create many more accurate facet models for simulation, and the process can be worked on in parallel by several researchers with minimal time needed for training. Satellites like Intelsat 10-02, Jason-2, and Radarsat-2 have been modeled using this technique, producing a variety of facet models with their solar panels tilted at various angles to provide a diverse range of light curves for each one.
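To illustrate how a facet model maps to a light curve, here is a toy Lambertian sketch using a spinning unit cube (made-up geometry and uniform reflectance; real facet models and surface properties are far richer):

```python
import numpy as np

# Toy light curve from a facet model: total brightness is the sum of
# Lambertian contributions from facets facing both the Sun and the
# observer. Geometry and reflectance here are made up for illustration.

def brightness(normals, areas, sun_dir, obs_dir):
    """Sum Lambertian contributions of facets facing Sun and observer."""
    mu_sun = normals @ sun_dir
    mu_obs = normals @ obs_dir
    vis = (mu_sun > 0) & (mu_obs > 0)
    return float(np.sum(areas[vis] * mu_sun[vis] * mu_obs[vis]))

def rotz(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# A unit cube as a 6-facet "satellite", spinning about its z-axis.
normals = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                    [0, -1, 0], [0, 0, 1], [0, 0, -1]], dtype=float)
areas = np.ones(6)
sun = np.array([1.0, 0.0, 0.0])
obs = np.array([0.0, 1.0, 0.0])

angles = np.linspace(0.0, 2 * np.pi, 80, endpoint=False)
curve = [brightness(normals @ rotz(a).T, areas, sun, obs) for a in angles]
# The cube's 4-fold symmetry shows up as a light-curve period of pi/2.
```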
Supervisor: Isaac B Smith
Profile: As a Physics & Astronomy student, Matias has always been passionate about the Universe, the stars, and specifically our neighboring planets. His research focus is on mapping the layers of the north polar cap of Mars, using radar-sounding data from the Shallow Radar instrument.
Abstract: Interpreting Layers of the North Polar Ice Cap of Mars for a Climate Signal
The North Polar Region of Mars is characterized by spiral troughs in the morphology and stratigraphy of its surface. The Planum Boreum includes the north polar layered deposits (NPLD), a combination of regions of sand, dust, and ice that evidence possible erosion and deposition, or trough migration, driven by climatic processes as well as by cyclical variations in the planet's orbit and rotation that have shaped this Martian region and its evolution, giving insight into the climatic history of the planet. With radar sounding data acquired by the Shallow Radar instrument (SHARAD), it is possible to detect the reflections between the layers of the Martian polar cap, a product of the dielectric properties of the NPLD. By studying these layers, it is possible to detect the trough migration path, which has been mapped with industry software using the data mentioned above to acquire a 3D cross section of the different layers, in which trough migration and deposition, as well as unconformities and wind patterns, are evidenced along the trace of the stratigraphic reflector. These reflectors are divided into their respective regions in the NPLD, and a horizon is created for each reflector, which is then analyzed with radar mapping software. For this project, the reflector R15 was chosen, since the trough migration path as well as several unconformities can be easily observed as part of this horizon. An unconformity corresponds to an erosion of the layer onto which more material was later deposited, forming a gap between two sections of the same initial layer. Due to this division, it is possible to observe a trough migration path, in which the trough can be seen initially as part of the layer that eroded and then be seen again after the unconformity.
By studying these paths, the objective is to understand the formation of the troughs and to locate the unconformities, as well as to assess the impact of climatic processes and wind patterns on the NPLD through erosion and deposition. This analysis is performed for the first time with three-dimensional data, making it possible to compare the new findings with the available 2D data, yielding new interpretations or confirming previous theories.
Supervisor: George Zhu
Profile: Aiden is a 4th-year engineering student. He joined Dr. Zhu’s summer research project because it aligns with his research interests in AI, robotics, and space technology.
Abstract: Robotic Capture and Collision Avoidance in Simulated Space Environment
Space exploration relies heavily on the development of artificial intelligence (AI) for in-orbit robotic capabilities. Specifically, the capture, manipulation, and collision avoidance of spacecraft are key enablers for common tasks such as satellite maintenance, docking, and orbital debris removal. Until recently, most notable maintenance tasks have been performed through astronaut extravehicular activities, which are inherently risky. Meanwhile, there has been little to no orbital debris removal, and the number of spacecraft in orbit is increasing exponentially. This research project focuses on developing the fundamentals needed for these common tasks using machine learning, reinforcement learning, and related techniques. Through a series of experiments, we plan to replicate some of the challenges and conditions that AI may face in a typical space environment, including the limited power and computing capabilities available on satellites and the way the high reflectivity of spacecraft can affect 3D interpolation capabilities. The project also focuses on AI-enhanced collision avoidance and path planning, which will be tested within these experiments, mainly using the industry's leading software framework, ROS (Robot Operating System), as it is highly compatible and easily applied in the space industry.
Supervisor: Regina Lee
Profile: Yanchun is a Space Engineering student conducting research on a payload for the upcoming RSOnar mission. He is currently developing methods to identify RSOs and to estimate their attitude profiles from image sequences. He has a great interest in remote sensing and Space Situational Awareness (SSA).
Abstract: Demonstrating Feasibility of Integrated RSO Detection and Tracking Workflow With Star-Tracker Grade Sensors and Particle Swarm Optimization Simulator
The rapid increase in Resident Space Objects (RSOs) has posed challenges to space safety. Various methods of detecting and tracking RSOs have met with limited success in creating an effective, automatic RSO identification workflow. In August 2022, the RSOnar team at York is launching a payload onboard a Stratos balloon to demonstrate the feasibility of using star trackers for RSO detection and tracking. This research supports the RSOnar mission by optimizing the payload assembly and by calibrating the Space-Based Optical Image Simulator (SBOIS) to analyze light curves for attitude determination.
To optimize the RSOnar design for assembly, different assembly orders and connector/bracket placements were tested to find the optimal arrangement without altering structural properties. In calibrating SBOIS, I used the light curves of NEOSSat extracted during my 2021 LURA project as inputs. Since we possess NEOSSat's real attitude profiles, using these light curves greatly simplifies the process of verifying the accuracy of the simulation results.
Through multiple fit tests, my colleagues and I finalized the frame design of RSOnar and optimized it for easy assembly; assembly instructions were also written to allow rapid disassembly and reassembly for hardware debugging. For SBOIS, possible improvements were identified and implemented within its particle swarm optimization. The simulator can now reproduce light curves and attitudes that closely match the real inputs, demonstrating the feasibility of reconstructing attitude profiles. With the launch of RSOnar and the calibration of SBOIS, we are one step closer to an integrated, widely usable automatic RSO detection and tracking workflow that improves space situational awareness.
Supervisor: Isaac B. Smith
Profile: Sabrina is a Space Engineering student entering her fourth year. With the goal of working in research in the field of Earth and space sciences, LURA/USRA has given her an opportunity to assist in mapping water ice depths in the Martian subsurface.
Abstract: Identifying Subsurface Ice in Phlegra Montes, Mars, Using the Mars Reconnaissance Orbiter’s Shallow Radar (SHARAD)
The Martian surface and subsurface are confirmed to have abundant water ice at the poles and mid-latitudes. Evidence from radar, thermal, and imagery data suggests the presence of shallow ground ice and the existence of debris-covered glaciers. This project aims to characterize ground ice and glaciers in Phlegra Montes, a chain of massifs in the mid-latitudes of Mars (30°-50° N) with many periglacial and glacial features. Notably, the southern portion of Phlegra Montes has been suggested as a possible landing site for the first human exploration, as accessible water ice and solar energy (from low latitudes) are significant local-resource requirements. This study supports the mapping of water ice by identifying areas and features within Phlegra Montes whose geomorphology is consistent with shallow ground ice and glaciers. A combination of orbital radar and image data from the SHARAD (SHAllow RADar) and Context Imager (CTX) instruments on NASA’s Mars Reconnaissance Orbiter is used. With two geospatial software platforms, JMARS and SeisWare, reflections are identified by comparison with clutter simulations and then mapped in SeisWare to show ice depths throughout areas of particular interest. These areas include the Northeastern Lobate Debris Apron and LDA 1694 (inside the mountain chain). Mapping of these regions shows a high probability of shallow ground ice at sites where flow features can be identified in CTX imagery in JMARS. The results of this investigation will contribute to a better understanding of the ice distribution at Phlegra Montes and may support the selection of a landing or colony site there. The project may also provide targets of interest for the International Mars Ice Mapper (I-MIM) Mission.
Chakka Vijaya Aswartha Narayana
Program: Mitacs Globalink
Supervisor: Ping Wang
Abstract: An Efficient Partial Model Training Strategy for Federated Learning with Resource-Constrained Edge Devices
Federated learning is a distributed training strategy that enables collaborative training across multiple edge devices without accessing the data stored on those devices. In this strategy, each edge device trains a local model on its own data, and the model parameters are then shared with a server that aggregates them to generate a global model. This process is repeated until a threshold performance is achieved. The strategy has helped many deep-learning-powered apps and products, such as Google's Gboard, achieve enhanced personalized results. Despite these benefits, the performance of deep learning algorithms suffers in this setting due to the presence of devices with heterogeneous resource constraints. In a real scenario, devices may have different memory and communication bandwidth, leading to overall performance degradation if this is not addressed appropriately during the aggregation of local models. This research focuses on providing an efficient strategy that generates a personalized subnetwork for each edge device in accordance with its resource constraints. The proposed strategy is inspired by the existing Dropout and DropConnect methods, which are often used to address overfitting in neural networks. A comparison of the proposed method with existing strategies on metrics such as convergence time, performance, and memory usage will show the benefits of this strategy over existing methods.
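The DropConnect-inspired idea can be pictured as masking each device's copy of the model to match its capacity, then averaging each weight only over the devices that trained it. The following numpy sketch is our own illustration under assumed names (`make_mask`, `aggregate`), not the project's actual implementation:

```python
import numpy as np

def make_mask(shape, keep_ratio, rng):
    # DropConnect-style mask: keep only the fraction of weights the device can afford
    return (rng.random(shape) < keep_ratio).astype(float)

def aggregate(local_updates, masks):
    # Federated averaging over partial models: each weight is averaged
    # only over the devices whose subnetwork included it
    num = np.sum([w * m for w, m in zip(local_updates, masks)], axis=0)
    den = np.sum(masks, axis=0)
    return np.divide(num, den, out=np.zeros_like(num), where=den > 0)

rng = np.random.default_rng(0)
global_w = np.zeros((4, 4))
# two devices with different capacities train masked copies of the model
masks = [make_mask(global_w.shape, r, rng) for r in (0.9, 0.5)]
updates = [global_w + m for m in masks]  # stand-in for actual local training
global_w = aggregate(updates, masks)
```

Weights never selected by any device keep their previous value (here, zero) rather than being diluted by devices that did not train them.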
Supervisor: Suprakash Datta
Abstract: An Empirical Examination of Various Reinforcement Learning Algorithms in a Deterministic Environment
Reinforcement learning (RL) is a powerful learning tool that allows a computer agent to use its past experience of rewards received from its actions to formulate an optimal decision-making policy to maximize future rewards. As the use of computers as learning/semi-autonomous agents has increased in recent years, so has the importance of RL in training these agents in contexts where other machine-learning methods are unsuitable.
To provide a controlled and deterministic test environment, the 8-bit video game Arkanoid (a variation on a pinball game) was used in conjunction with the open-source OpenAI Gym and Gym Retro environments to teach a computer agent to play the game, using only the raw pixel data from the screen, run through a convolutional neural network, and a rudimentary reward function based on the agent's score and remaining lives. In our experiments, the RL agent demonstrated slow but steady progress toward learning to solve the game.
Recent research has proposed other, potentially more effective approaches to training an RL agent. These include using both the full screen and a smaller, "zoomed-in" version of the screen centred on the playable character in parallel, and introducing the concept of "regret" to let the RL agent recall the negative consequences of specific actions, in order to speed up the convergence of the agent toward its learning goal. The results of these alternative training approaches will be examined, and any generalizations of the empirical results will be discussed.
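A reward function of the kind described above ("based on the agent's score and remaining lives") might look like the sketch below; the specific life penalty is an arbitrary assumption, not the value used in these experiments:

```python
def step_reward(prev_score, score, prev_lives, lives, life_penalty=100.0):
    """Per-step reward: score gained this frame, minus a fixed penalty
    for each life lost (assumed weighting, for illustration only)."""
    return float(score - prev_score) - life_penalty * (prev_lives - lives)
```

Such shaping gives the agent a dense signal from the score counter while strongly discouraging losing the ball.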
Program: Mitacs Globalink
Supervisor: Ping Wang
Abstract: Caching Strategy based on Deep Reinforcement Learning for IoT networks using Transient Data
In recent years, the Internet of Things (IoT) has grown steadily, and its potential is now clearer. The primary limitations of these networks are transient data and scarce energy supplies; beyond this, minimum delay and other quality-of-service (QoS) requirements must still be met. An effective caching policy can help meet general QoS standards while overcoming the unique restrictions of IoT networks. By utilising deep reinforcement learning (DRL) algorithms, we can create a powerful caching system without requiring any prior knowledge or contextual information. In this work, we aim to improve the cache-hit rate and reduce the energy consumption of the IoT network by proposing a DRL-based caching scheme. The freshness and lifetime of the data are considered. Furthermore, we allow the data files to differ in size, which has not yet been explored and makes our work better suited to real-life scenarios. We suggest a hierarchical architecture for deploying edge caching nodes in IoT networks in order to more accurately capture the regionally distinct popularity distribution. Extensive testing reveals that our proposed solution outperforms well-known traditional caching policies by significant margins in terms of cache-hit rate and energy usage of the IoT network.
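One concrete way to model the freshness-and-lifetime constraint is a cache in which entries expire, so a stale entry counts as a miss; a DRL agent's reward can then trade hit rate against the energy cost of fetching. The toy sketch below (class and field names are our own, not the paper's) illustrates the environment side only, not the proposed DRL policy:

```python
class TransientCache:
    """Toy cache for transient IoT data: each entry expires after its lifetime."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}  # key -> (birth_time, lifetime, size)

    def put(self, key, now, lifetime, size):
        # admit only if there is room (eviction policy is what the DRL agent learns)
        if len(self.store) < self.capacity or key in self.store:
            self.store[key] = (now, lifetime, size)

    def get(self, key, now):
        entry = self.store.get(key)
        if entry is not None:
            birth, lifetime, _ = entry
            if now - birth <= lifetime:
                return True      # fresh hit
            del self.store[key]  # expired: stale data must be re-fetched
        return False             # miss: costs energy to fetch from the source node

def reward(hit, fetch_energy=1.0):
    # simple per-request reward trading off hit rate against energy use
    return 1.0 if hit else -fetch_energy
```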
Supervisor: Maleknaz Nayebi
Profile: Xuchen is a computer science graduate with an interest in machine learning, image processing, and related application development.
Abstract: Image Capturing and Processing to Improve Software Quality
Being time-saving and informative, image sharing has reshaped communication on social platforms, and developer communities are no exception. Software developers share more and more images to communicate within teams, yet most productivity tools are based on textual content. Previous studies have shown the increasing importance of mining and utilizing images in software development decisions for two developer communities, Bugzilla and Stack Overflow. However, not all software development tasks benefit from additional multimedia information. In this study, we focus on an empirical evaluation within the Mozilla Foundation to (i) determine the development tasks that benefit from image sharing and processing, (ii) develop a recommender system, and (iii) develop a web extension that identifies and captures informative images from open windows. The performance and impact will be qualitatively evaluated by Mozilla developers. The decision support system can assist in filing explicit bug reports. We benchmarked classifiers on textual and non-textual attributes of Bugzilla tasks and created a taxonomy of image types. We will build a neural network relating task attributes to images, which forms our extension's backbone. For new bug reports, the web extension scans the open windows and extracts features to match against historically attached images; windows with high similarity are selected as candidates, and their most relevant parts are cropped out and attached to the report. Our tool will reduce both the time and the expertise required to file bug reports.
Supervisor: Marin Litoiu
Abstract: Template-Based Resource Estimation of Database Workloads
Accurate resource estimation is a challenging problem in database management systems (DBMSs). Without it, multiple web queries accessing a database system can lead to poor performance. Although there has been progress in estimating the resources of single query executions, no effective approach exists for estimating the resources of concurrent queries. This research focuses on grouping similar queries into batches and using machine learning models to predict the resource consumption of those query batches. The ability to predict the total resource cost of concurrent queries is paramount for a DBMS, as the predicted value can provide insight into ways to avoid database failures.
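The grouping step can be illustrated by normalizing away literals so that structurally identical queries map to a single template, with a cost model then trained per template batch. A minimal sketch (the regexes are deliberate simplifications of real SQL lexing):

```python
import re

def to_template(query):
    """Map a SQL query to its template by replacing literals with placeholders,
    so structurally similar queries fall into the same batch."""
    q = re.sub(r"'[^']*'", "?", query)        # string literals -> ?
    q = re.sub(r"\b\d+(\.\d+)?\b", "?", q)    # numeric literals -> ?
    return re.sub(r"\s+", " ", q).strip().lower()
```

Queries that differ only in their bound values then share one template, which is the key that a learned per-batch cost predictor would be indexed by.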
UN SDG 9: Innovation in Space Science & Engineering
Supervisor: Regina Lee
Abstract: Open-Source Photometric Light Curve Processor for Space Situational Awareness (SSA)
Access to space is threatened by the large and growing number of Resident Space Objects (RSOs) that can cause collisions in Earth orbit. Space Situational Awareness (SSA), the knowledge of objects in near-Earth space, requires accurate tracking, identification, and characterization of RSOs. Characterization can be done by analyzing the change in an RSO’s magnitude over a sequence of images, referred to as a light curve. An open-source image processor is developed to automate the generation of a light curve in apparent magnitude from a series of observations containing images of standard stars and the target satellite, in star-stare mode (SSM) and track-rate mode (TRM) respectively. Using the standard stars, all-sky photometric transform coefficients are calculated and used to convert unknown instrumental magnitudes to a standard system, so that data spanning multiple nights or observatories can be directly compared. The colour transform coefficients are applied to the filtered RSO magnitudes to output a time-resolved colour photometric light curve for analysis. Preliminary results from a study of the docking of two satellites in geostationary orbit (April 2021, observed by Jim Johnston) showed that the processor can generate light curves that reveal the change in shape between pre- and post-docking. Automating this process has the potential to increase the productivity of researchers characterizing RSOs.
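The applying half of an all-sky transform amounts to combining a fitted zero point, an extinction term, and a colour term with each instrumental magnitude. A hedged sketch of that step (the coefficient names are generic, not the processor's actual variables):

```python
def standard_magnitude(m_inst, airmass, colour, zero_point, k_ext, colour_coeff):
    """Convert an instrumental magnitude to the standard system using all-sky
    photometric coefficients fitted from standard-star frames:
    zero point, first-order extinction (per unit airmass), and a colour term."""
    return m_inst + zero_point - k_ext * airmass + colour_coeff * colour
```

Fitting `zero_point`, `k_ext`, and `colour_coeff` by least squares over many standard stars at varied airmass is what makes magnitudes from different nights or observatories directly comparable.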
Supervisor: Isaac B. Smith
Profile: Jessie is a 4th-year atmospheric science student who has worked in planetary science for 2 years. She participates in research for the opportunity to share her passion with new audiences through conferences such as Lassonde Undergraduate Summer Research Conference.
Abstract: Cyclostrophic Winds in the Atmosphere of Venus from the Radio-Occultation data collected by Venus Express and Akatsuki
Venus is shrouded by optically thick clouds between 50 and 70 km altitude. Using radio occultation, we can measure how the refractive index of the atmosphere bends a radio beam passing through it, and from this determine vertical pressure and temperature profiles. Using data from Venus Express (VEx) and Akatsuki, we calculated wind speeds from the thermal wind equation, given by the force balance between the centrifugal force and the pressure gradient force. This study followed the work of Piccialli et al. (2012), and we developed our own techniques for processing the data. We then created latitude-altitude cross-sections of wind speed, temperature, and various stability parameters of the atmosphere. The nightside and dayside of Venus show different features in the meridional cross-sections of zonal wind speed, which illustrates the importance of analyzing the data with the effects of local time in mind. The VEx data show a midlatitude jet at the cloud top, also observed by Piccialli et al. (2012). In the Akatsuki dataset, this jet appears but changes in altitude and latitude with local time. The mid-to-lower atmospheric structure is not well understood, since we have few observations of winds at lower altitudes. Because the position of the zonal midlatitude jet is related to the global wind structure, this work will shed light on the unknown structure of the Hadley cells in the atmosphere of Venus.
Piccialli, A., Tellmann, S., Titov, D. V., Limaye, S. S., Khatuntsev, I. V., Pätzold, M., & Häusler, B. (2012). Dynamical properties of the Venus mesosphere from the radio-occultation experiment VeRa onboard Venus Express. Icarus, 217(2), 669–681. https://doi.org/10.1016/j.icarus.2011.07.016
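For reference, the force balance used in this kind of analysis is the cyclostrophic thermal wind relation; the form below follows the standard Venus literature, but the exact sign conventions are our sketch and should be checked against Piccialli et al. (2012):

```latex
% Cyclostrophic balance on an isobaric surface
% (u: zonal wind, \varphi: latitude, \Phi: geopotential):
u^2 \tan\varphi = -\,\frac{\partial \Phi}{\partial \varphi}\bigg|_{p}
% Differentiating in the log-pressure coordinate \xi = \ln(p_0/p) and using
% hydrostatic balance \partial\Phi/\partial\xi = RT gives the thermal wind
% form, integrated upward from a lower boundary wind:
\frac{\partial\,(u^2)}{\partial \xi}
  = -\,\frac{R}{\tan\varphi}\,\frac{\partial T}{\partial \varphi}\bigg|_{p}
```

This is why the retrieved temperature cross-sections alone suffice to reconstruct the zonal wind field, up to the choice of the lower boundary wind.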
Program: Mitacs Globalink
Supervisor: George Zhu
Profile: Karthik is a third-year undergraduate student in Mechanical Engineering at IIT Madras, India. He is working with Mitacs to learn more about space engineering.
Abstract: Deployment and Operation of a Tethered CubeSat
The Kessler Syndrome is a scenario in which the amount of debris in orbit around Earth reaches a point where collisions generate more and more debris, rendering orbits unusable for further missions. This project aims to demonstrate the effectiveness of electrodynamic tethers (EDTs) in deorbiting space debris using a tethered satellite system.
The satellite is intended to be a 3U CubeSat containing three components: the Mother and Daughter satellites and the tether. The experiment has two stages: deployment of the satellite in space and its subsequent deorbit using the EDT. The first stage entails designing a storage mechanism for the CubeSat and an ejection mechanism that separates the Mother and Daughter satellites in space. In the second stage, onboard electronics are implemented based on the power generation and temperature profiles.
The Mother satellite is designed to be a 3U satellite housing the tether and daughter satellite inside it. During deployment, a pin-puller mechanism is actuated, releasing a spring-loaded door. A 500m long aluminum tape tether connecting the mother and daughter satellites is wound around a fixed spool. Post-deployment, the power provided by the body-mounted solar panels is assessed at various attitude profiles to determine which components may be employed on the satellite. Thermal analyses are also conducted to determine the range of operating temperatures faced by onboard electronics.
A lifetime analysis reveals that the increased drag induced by the tether reduces the lifetime from 2.8 years to just 19 days. The mission would demonstrate not just the viability of such tethers for deorbiting debris from low-Earth orbit, but also an inexpensive way of performing orbital maneuvers.
Supervisor: Alidad Amirfazli
Abstract: An Experimental Investigation on Levitation Stability of Acoustic Levitators
By using acoustic radiation forces to overcome gravity, objects of various shapes can be suspended in air via acoustic levitation. This study experimentally investigates the stability of objects levitated in an acoustic levitator in order to improve levitation stability. We demonstrate a new method for improving stability by applying a cover to the transducer arrays, which alters the pressure field; this is easier and more straightforward than improving stability through transducer phase modulation. The results show that the applied cover can effectively improve the levitation stability of the acoustic levitator for objects of different shapes. Ongoing work in this project is mapping out the pressure field in the levitator to illustrate the underlying physics of levitation stability.
Sparsh Sai Poosarla
Supervisor: Isaac B. Smith
Profile: Sparsh is a Space Engineering graduate from Lassonde. He is excited to be working with Dr. Smith on research into water ice deposits at the Martian poles.
Abstract: Simulated sounder results for different substrates using parameters suggested by Pettinelli for discrete substrate material
The International Mars Ice Mapper mission is a Mars orbiter being developed to quantify the extent and volume of water ice in non-polar regions of Mars. One objective of the mission is to determine whether there is ice within the first 10 m of the surface and to find the spatial extent of the deposits; the orbiter must therefore be able to take accurate radar readings at that depth. Owing to the complexity and variety of the Martian subsurface, theory alone cannot determine the ideal radar setup. A Python-based sounding radar simulator, created by Sam Courville for a similar purpose, is being modified by us for this task. Pettinelli had determined and quantified the loss caused by subsurface boulders on Mars, and these properties were added to the simulation along with other features such as surface scattering loss and power calculations. Testing various ground-layer setups against radars of different frequencies will help determine the most effective radar for most ground types on Mars. With these simulations, it was determined that an L-band sounder could be sufficient for the required science. Further improvements and verifications can be added to the simulator, the final step being a comparison of responses against real-life results. With a verified simulator, properties can easily be modified to check whether a sounding radar will perform as expected down to a required depth on any astronomical object.
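One ingredient of such a power budget is the two-way attenuation through a lossy substrate, which already shows why lower frequencies such as L-band probe deeper. The sketch below assumes a uniform low-loss dielectric and is a textbook simplification, not the simulator's actual model:

```python
import math

def twoway_loss_db(depth_m, freq_hz, eps_r, loss_tangent):
    """Two-way attenuation (dB) for a radar pulse traversing a uniform
    low-loss dielectric layer of thickness depth_m (down and back)."""
    c = 3.0e8  # speed of light in vacuum, m/s
    # low-loss dielectric attenuation constant, in nepers per metre
    alpha = math.pi * freq_hz / c * math.sqrt(eps_r) * loss_tangent
    return 2.0 * depth_m * alpha * 8.686  # nepers -> dB
```

Because the loss scales linearly with frequency, halving the carrier frequency halves the attenuation in dB for the same substrate, at the cost of range resolution.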
Program: Research Assistant
Supervisor: Regina Lee
Profile: Timothy is a 4th-year Space Engineering student who chose to participate in research with Professor Lee as an opportunity to learn a variety of new skills and to spread understanding of space situational awareness.
Abstract: Development of a Neural Network for Pre-Sorting RSO Observation Images
Space situational awareness (SSA) is the field of acquiring knowledge of the space environment, including tracking and characterizing resident space objects (RSOs). Without knowledge of the orbital characteristics of RSOs, the risk of collisions is severe; collisions can result in critical mission failures and the production of even more space debris, potentially leading to a runaway cascade. Many algorithms have been developed for RSO detection; however, most work only on images with specific characteristics. The objective of this research is to develop a pre-processor that applies unsupervised and supervised convolutional neural networks to sort RSO-tracking and star-tracking images of various optical properties, which potentially contain RSOs, into categories based on their characteristics, in order to create datasets optimized for different methods of RSO detection. The accuracy of the algorithm can be assessed through visual confirmation that the images in a category possess a distinct characteristic not shared with other categories. This allows RSO detection algorithms to efficiently find and select images for further processing. The resulting fully categorized and labelled dataset makes RSO detection faster and provides more information on how to improve the process at every level, from hardware placement to RSO size and identity confirmation.
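The unsupervised half of such a pre-sorter can be pictured as extracting simple per-image statistics and clustering them into categories. The toy sketch below (both the features and the minimal k-means are our illustration, not the trained networks described above):

```python
import numpy as np

def image_features(img):
    # crude per-frame statistics: background level, noise, and the fraction
    # of pixels bright enough to belong to stars or RSO streaks
    background = np.median(img)
    noise = img.std()
    bright = np.mean(img > background + 3.0 * noise) if noise > 0 else 0.0
    return np.array([background, noise, bright])

def kmeans(X, k, iters=20, seed=0):
    # minimal k-means to sort feature vectors into k categories
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels
```

Each resulting cluster can then be routed to the RSO detection algorithm best suited to its image characteristics.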
UN SDG 9: Innovation in Software & AI
Program: Research Assistant
Supervisor: Maleknaz Nayebi
Abstract: Large Scale Replication Study on Release Planning Methods
Release engineering is an important part of any incremental and iterative development process. Platform-mediated software distribution networks (such as app stores) enable fast feedback from customers. Open access to customer feedback, together with access to similar software products, opens up the opportunity to look beyond organizational boundaries for software development. Over the past few years, many researchers have invested in feedback-driven release engineering methods. Some studies classify and link user feedback to commits, issue reports, or source files, supporting developers in filtering and addressing large amounts of user feedback. Other studies focus on feature prioritization, determining the urgency of additional features or bug fixes from user feedback. Many methods have shown promising results, yet there is no benchmark for comparing their performance. Building on existing knowledge is important for scientific progress: replication enables validation and comparison of existing methods and creates an additional implementation for future researchers to reference. In this study, we provide tools and techniques for replicating six different methods for release planning of mobile apps and compare their performance on one unified dataset.
Jarod Anjelo P. Lustre
Supervisor: Robert Allison
Abstract: Stereoscopic Depth Constancy from a Different Direction
Our eyes gather information that is ever-changing, but our perception of the world usually remains constant. These perceptual constancies allow us to retain our knowledge of the physical attributes of objects (their size, color, shape, and so on) despite changes in viewing conditions. One example is depth constancy, which refers to our ability to maintain a relatively constant perception of an object’s extent in depth at different distances. For instance, the retinal image of a rod pointed toward you becomes increasingly shorter the farther away it is, but you remain aware that it has the same length at any distance. One of the most compelling cues to depth is binocular stereopsis, which derives depth information from the differences between the images in the two eyes. Most research on stereoscopic depth constancy focuses on objects presented along the midline of the head, leaving much to understand about how objects positioned away from the midline affect depth constancy. This research investigates how the visual system maintains depth constancy for stimuli in different directions across the visual field. To do this, we are developing a virtual reality experiment in Unity that simulates the experience of viewing two rods displaced laterally or radially relative to the midline. Analyzing the perceived relative depth of rods located at different directions and distances from the head may help identify how depth constancy is maintained in these conditions. We argue that the egocentric location of an object, besides its distance, is pertinent to understanding stereoscopic depth constancy, and these experiments will provide empirical tests of this assertion. Exploring this relationship also underscores the relevance of concepts like the horopter to depth constancy, which may provide a theoretical framework for predicting perceived depth in the real world and simulating it effectively in virtual worlds.
Jiahao Li and Amin Mohammadi
Supervisor: Richard Wildes
Abstract: Audio-Video Scene Recognition
Video and audio are commonly used as separate information sources in artificial intelligence (AI). Although there are many promising algorithms for understanding video or audio information, the two are rarely combined. Researchers are currently developing algorithms that take both visual and audio data into account; however, the shortage of substantial datasets has limited this work. To address this shortage, we are building a new audio-video dataset of real-world scenes (e.g., fire, car traffic, fireworks). Our collection will include 10,000 videos from 20 different classes, making it the largest dynamic scene dataset yet assembled and enabling research on the combined processing of video and audio data. To ensure that the scene patterns are properly captured, we diligently searched for videos satisfying two criteria: (1) the video lasts between 5 and 10 seconds; (2) the video is cropped so that the pattern of interest occupies at least 90% of the frame. Our dataset will be structured along two complementary factors in addition to the dual information sources: spatial appearance (e.g., colour and texture) and dynamics (e.g., motion), to investigate the role of each in scene recognition as well as their interaction with the audio and video sources. We will create, train, and test scene recognition algorithms using cutting-edge AI approaches (e.g., convolutional neural networks and machine learning) on our dataset. The dataset can also be used in neuroscience to study how the brain responds to combined audio and visual information, and it has further applications such as enhancing the audio-video precision of home displays.
Supervisor: Hina Tabassum
Abstract: Deep Reinforcement Learning for AoI-Aware Blockchain-enabled Wireless Networks
Blockchains, commonly categorized as public or private, are widely applied in dynamic networks that require real-time information updates to enhance security. While transmission and consensus latency are the usual performance metrics, age of information (AoI) is a more accurate metric because it measures data freshness: the time elapsed since the last successful data update. For fast-growing IoT networks, optimizing the average AoI (AAoI) in a private blockchain such as Hyperledger Fabric (HLF) is more suitable than in a public blockchain: HLF has a central authority that controls all ledgers and is more scalable and faster, whereas public blockchains such as Bitcoin tend to perform poorly for large numbers of transactions. In this research project, we first performed a literature survey and discovered that the problem of AoI minimization in a private blockchain network has not been tackled to date, and that the existing definitions of AoI are not rigorous. Thus, for an HLF-enabled network, we first crafted a more precise AAoI metric by considering all phases from transaction generation to transaction validation, i.e., the generation, transmission, endorsing, ordering, and validation phases. We then plan to apply deep reinforcement learning, which can dynamically learn from given data to minimize the AAoI. We identified the current packet transmission success rate as the state; the power of the source IoT sensors and the block size (which determines the maximum number of transactions in a block) as the action; and the AAoI as the loss function. Our results are expected to outperform conventional optimization-based AAoI minimization algorithms in both performance and computational time complexity.
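The metric itself can be made concrete: at time t the age is t minus the generation time of the freshest validated transaction, and AAoI is the time average of that sawtooth. A sketch under our own naming (each event carries a generation timestamp and the timestamp at which it cleared the final validation phase):

```python
def average_aoi(events, horizon):
    """Average Age of Information over [first validation, horizon].
    events: (generation_time, validation_time) pairs, where validation_time
    is when the transaction has cleared generation -> transmission ->
    endorsing -> ordering -> validation."""
    events = sorted(events, key=lambda e: e[1])
    last_gen, t = events[0]  # freshest validated update so far
    area = 0.0
    for gen, val in events[1:]:
        if val > horizon:
            break
        # age grows linearly from (t - last_gen) to (val - last_gen)
        area += 0.5 * ((t - last_gen) + (val - last_gen)) * (val - t)
        last_gen = max(last_gen, gen)
        t = val
    area += 0.5 * ((t - last_gen) + (horizon - last_gen)) * (horizon - t)
    return area / (horizon - events[0][1])
```

A DRL agent tuning transmit power and block size changes the validation times, and hence the area under this sawtooth.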
Supervisor: Gunho Sohn
Profile: As a 2020 LURA program participant, Naeem aims to continue exploring machine learning paradigms, notably semi-supervised learning, along with techniques such as deep learning, giving him a broader foundation in machine learning given the importance of data-driven AI in our data-abundant world.
Abstract: Semi-Supervised Land-Water Classification of LiDAR Waveforms under the Mean Teacher Model
Deep learning techniques have been successfully used in the classification of LiDAR waveforms (descriptions of the distribution of energy returned to a sensor from a light source). Despite the abundance of such data, labeling can prove costly and time-intensive. As opposed to supervised learning, where a model is trained only on labeled data, semi-supervised learning is a machine learning paradigm that also takes advantage of unlabeled data during training. Here, we employ the Mean Teacher Model, a method of averaging model weights during training, to improve the accuracy of our supervised land-water waveform classifier within a semi-supervised setting. Consequently, we are able to attain performance similar to the supervised architecture while using fewer labeled waveforms.
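The weight-averaging at the heart of the Mean Teacher Model is an exponential moving average (EMA): the teacher's parameters track a slowly moving average of the student's. A minimal sketch, with plain lists standing in for network weight tensors and an arbitrary decay value:

```python
# Mean Teacher weight update sketch: the teacher's parameters are an
# exponential moving average (EMA) of the student's parameters.

def ema_update(teacher, student, alpha=0.99):
    """teacher <- alpha * teacher + (1 - alpha) * student, elementwise."""
    return [alpha * t + (1.0 - alpha) * s for t, s in zip(teacher, student)]

teacher_w = [0.0, 0.0]
student_w = [1.0, 2.0]
for _ in range(3):  # three training steps with a fixed student, for illustration
    teacher_w = ema_update(teacher_w, student_w)
print(teacher_w)  # the teacher drifts slowly toward the student's weights
```

In the full method the teacher's predictions on unlabeled waveforms provide a consistency target for the student, which is how the unlabeled data contributes to training.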
Program: Research Assistant
Supervisor: Maleknaz Nayebi
Abstract: Software Ownership in the Healthcare Ecosystem
Ownership of software artifacts has become a point of interest for many software companies because software quality is highly related to it. Researchers have modelled the ownership of software artifacts in different ways and related it to a variety of code and developer performance metrics. This project targets the ownership problem in an industrial setting, with an industrial partner, Brightsquid. With the COVID-19 pandemic enforcing distributed teams, the company recently changed its practice to recognize and improve developer accountability. To this end, the product manager asks developers to voluntarily choose user stories and be responsible for the entire pipeline from the user to deployment. Previously, the product manager decomposed each user story into development tasks and assigned each task to a developer. While the team perceived a positive impact from the accountability perspective, we in particular are answering: “How does the assignment of user stories to developers impact code ownership in comparison to the task assignment model?” We adopted a model used and published by Microsoft and replicated its ownership measure in our study on Brightsquid. We measured ownership metrics (both dependent and independent variables), and by applying machine learning classifiers to the data and performing 10-fold cross-validation on the models, we compared and controlled the ownership status and its impact on code quality before and after the new model.
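The 10-fold cross-validation protocol mentioned above can be sketched with a plain index splitter. The actual study uses its own ownership metrics and classifiers; everything here is generic illustration:

```python
# Generic k-fold index splitter: each of the k folds is held out once
# for evaluation while the rest of the samples are used for training.

def k_fold_indices(n_samples, k=10):
    """Yield (train_idx, test_idx) pairs covering every sample once."""
    sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    indices, start = list(range(n_samples)), 0
    for size in sizes:
        test = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        yield train, test
        start += size

# With 100 samples, each fold holds out 10% for evaluation.
folds = list(k_fold_indices(100))
print(len(folds), len(folds[0][1]))  # 10 folds, 10 held-out samples each
```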
Supervisor: Robert Allison
Profile: Faruq is a 4th-year Software Engineering student at York University from Nigeria, with an interest in Human-Computer Interaction (HCI) and Extended Reality (XR) development. He is participating in the LURA research program to gain more insight into the field of XR and to see how academia plays a part in its development.
Abstract: Exploring the Impact of Immersion on Teleoperated MASS
Maritime Autonomous Surface Shipping (MASS) has the potential to provide benefits such as less risk to seafarers, economic efficiency, and reduced environmental impact. As this technology progresses and levels of automation improve, we may reach a point where operators no longer need to be on board the vessel during a voyage, which would lead to the need for remote human monitors, often referred to as Shore Control Center Operators (SCCOs). Interfaces must be designed so that autonomous vessels can still be monitored efficiently, and immersion can be a way to ensure that key factors in remote monitoring, such as Situational Awareness (SA) and Trust in the autonomous vessels, remain at levels where operations are not negatively affected. However, the effects of immersion on Trust and SA in the context of MASS are not well known. By carrying out user studies that test three increasing levels of immersion in a Shore Control Center (SCC) simulation, we can better understand how immersion affects the SCCO. So far, we find a positive correlation between how usable participants find the system and the Trust they have in the autonomous vessels. We also find that Mental Workload (MWL) has a consistently negative correlation with the SCCO's SA at all levels of immersion. Interestingly, participants tend to have higher SA at the middle level of immersion compared to the low and high levels. These results give a good indication of possible ways to design interfaces for SCCOs and hint that a moderate level of immersion may be preferable. Overall, the research examines the impact of immersion on Trust and SA in the context of MASS, with the aim of aiding the development of an Intelligent Adaptive Interface (IAI) that will improve the safety and efficiency of human-robot teams.
Mohammed Ahmed Fulwala
Supervisor: James Elder
Abstract: Jersey Number Recognition in Team Sports
Identifying players in team sports video is an important step in computer vision-based sports analytics, as player identities are essential for analyzing the game. In this work, we aim to identify players by automatically recognizing the jersey numbers on the back of the players’ jerseys. In team sports like hockey, it is difficult to recognize players because of their similar appearance and facial features hidden by protective gear such as helmets; their only identifying features are their team identity and jersey number. A player’s jersey number is only clearly visible when the player’s back is to the camera, so in the majority of video frames the number is not legible. The first step in a jersey number recognition system is therefore to establish whether a number is legible or illegible. To solve this problem, we use a machine learning approach: we train a convolutional neural network to perform binary classification of player images into “number legible” and “number illegible” classes. To train the network, we extracted and labelled hockey player images from thirteen pre-recorded hockey games, manually labelling each image according to jersey number legibility. We use this dataset to both train and evaluate our network. This is a crucial step in jersey number recognition and will help with higher-level sports analytics tasks, including long-term player tracking.
Program: Research Intern
Supervisor: Maleknaz Nayebi
Abstract: Product Families in Mobile App Platforms
Organizations often offer or develop more than one software product to deliver a service, but the relation between these software products and their use is not always clear. With the emergence of mobile app stores, many organizations (whether your local grocery store or your bank) adopt mobile apps alongside other modes of delivery (i.e., in-person visits, phone calls, websites). Some went a step further and introduced multiple mobile apps. With the advent of mobile phones, software development has evolved to the point where an organization can release multiple products to satisfy a set of common requirements. Yet it is not clear whether these different modes of delivery are a marketing strategy, an emerging trend, or an engineering need.
A group of multiple related products (or a product line) can be identified through careful analysis of the apps’ internal structure, package use, features (seen from the end-user’s perspective), and natural language processing (NLP) of descriptions and reviews. In this project, we retrieve mobile app families from the app store and identify: (i) their relation to each other, and (ii) their different modes of service delivery on the internet. We then perform holistic data mining using NLP and statistical methods to retrieve the similarities and differences between apps of the same family.
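One common building block for the text-similarity step above is cosine similarity between bag-of-words vectors of app descriptions. A minimal sketch with hypothetical descriptions (the project's actual NLP pipeline is more elaborate):

```python
import math
from collections import Counter

# Cosine similarity between bag-of-words vectors of two app descriptions.
# The descriptions below are invented examples, not real app-store text.

def cosine_similarity(text_a: str, text_b: str) -> float:
    va, vb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(va[w] * vb[w] for w in set(va) & set(vb))
    norm = math.sqrt(sum(c * c for c in va.values())) * \
           math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

desc_banking = "mobile banking transfer money pay bills"
desc_trading = "mobile trading invest money stocks"
print(round(cosine_similarity(desc_banking, desc_trading), 3))
```

Apps of the same family would be expected to score noticeably higher against each other than against unrelated apps, which is one signal for grouping them.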
Supervisor: Gunho Sohn
Profile: As a 4th-year software engineering student, Hongru is participating in the summer research program to continue developing his academic research skills and gain expertise in research.
Abstract: Semantic Labeling of York University LiDAR Data
3D semantic segmentation is a critical computer vision task required for many applications such as autonomous vehicles, virtual reality, and 3D reconstruction. It assigns pre-defined class labels to individual points, so that an entire point set is segmented into semantic classes, each of which shares a unique class value. Over the last decade, deep convolutional neural networks (DCNNs) have demonstrated their success by outperforming traditional machine learning and computer vision technologies in 3D semantic segmentation by large margins. However, state-of-the-art (SOTA) DCNNs still face challenges, especially in dealing with large-scale objects such as buildings and roads in urban environments. In this project, we conduct a feasibility study on DCNN-based semantic segmentation using a 2D bird’s-eye-view (BEV) projection that converts 3D point clouds into 2D datasets to simplify the segmentation task, while addressing how to increase the size of the receptive fields and how to handle the irregularities of point clouds. To achieve this goal, we use an open-source semantic segmentation toolbox called MMSegmentation. This PyTorch library provides 2D SOTA semantic segmentation networks. Using these baseline networks, we will implement a data processing pipeline to convert 3D point clouds into 2D datasets. This BEV conversion pipeline will be validated on the DALES (Dayton Annotated LiDAR Earth Scan) dataset, which contains over half a billion hand-labelled LiDAR points within a 10-square-kilometre area. Secondly, we will conduct a comparative study evaluating the performance of MMSegmentation on the DALES BEV dataset against 3D DCNNs such as the KPConv network with respect to accuracy and efficiency. We expect the outcomes of this project to provide useful insight into BEV’s benefits for advancing 3D semantic segmentation.
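The core of a BEV projection is binning 3D points into a 2D grid and summarizing each cell, for example by maximum height. A minimal sketch, with grid and cell sizes chosen arbitrarily for this example rather than taken from the project's pipeline:

```python
# Illustrative 2D bird's-eye-view (BEV) projection: (x, y, z) points are
# binned into a 2D grid, keeping the maximum height per cell (a common
# BEV channel). Points outside the grid extent are dropped.

def points_to_bev(points, cell_size=1.0, grid=4):
    """Project (x, y, z) points onto a grid x grid max-height raster."""
    bev = [[0.0] * grid for _ in range(grid)]
    for x, y, z in points:
        col, row = int(x / cell_size), int(y / cell_size)
        if 0 <= row < grid and 0 <= col < grid:
            bev[row][col] = max(bev[row][col], z)
    return bev

cloud = [(0.5, 0.5, 2.0), (0.7, 0.4, 5.0), (3.2, 3.9, 1.5), (9.0, 9.0, 4.0)]
bev = points_to_bev(cloud)  # the last point falls outside the grid
```

A real pipeline would rasterize additional channels (intensity, point density, minimum height) and feed the resulting multi-channel images to the 2D segmentation networks.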
Supervisor: Regina Lee
Profile: Vithurshan is an undergraduate Space Science student. His interests lie in nanosatellite design and space exploration.
Abstract: Database Architecture for SSA Ground Systems Server for Optical Data Processing
The primary objective is to implement an autonomous processing algorithm and categorize optical images for Space Situational Awareness (SSA) from diverse sources and formats. Optical images must be stored in a centralized Ground System Server (GSS), allowing partners and researchers to readily collate, modulate, and access the stored images. The GSS should also be capable of processing the stored images autonomously using Resident Space Object (RSO) identification and categorization algorithms constructed by researchers. The processed images are labelled and must be stored in a manner that allows images to be fetched together with their corresponding labels; the labels should be capable of describing the image on their own. Design decisions are made to remove constraints that could be imposed on the GSS in the near or distant future. The long-term goal is to engineer a public web application programming interface (API) with the functionality to perform the objectives stated above.
Problem Statement: Each algorithm created by the researchers applies a different level of processing to the optical datasets, which is used to categorize and compartmentalize the data. The GSS must merge the data into a single structure in which all researchers can autonomously process newly ingested image datasets using multiple algorithms. The data stored within the GSS should also be accessible at any time to anyone with login credentials. Researchers would use a Graphical User Interface (GUI) to access the GSS and the materials it contains; the GUI would let them fetch requested data in the file format of their choosing, such as CSV, MAT, or HDF5. In addition, the research must address the following questions: What is the most efficient way to store large volumes of optical image data? How should we organize the processed image data and their attributes within their respective bins? How can we manage this diverse collection of data so it can be easily understood, maintained, and distributed?
Methodology: HDF5 allows substantial amounts of information to be organized, accessed, and shared, with metadata included. HDF5 image datasets are stored as NumPy arrays. Attributes such as sensor data (temperature, pressure), image capture mode, time and date, and origin of the data (NEOSSAT, field camp) can be attached to their respective arrays. Groups act as folders holding data of a specific type, whether from a specific origin or of a similar kind.
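The group/dataset/attribute layout described above can be sketched with the h5py library. The file name, group names, and attribute values here are placeholders, not the project's actual schema:

```python
# Minimal sketch of the HDF5 layout: one group per data origin, image
# datasets stored as NumPy arrays, and sensor metadata attached as
# attributes so images and labels are fetched together.
import h5py
import numpy as np

with h5py.File("gss_images.h5", "w") as f:
    neossat = f.create_group("NEOSSAT")          # group = folder by origin
    frame = np.zeros((64, 64), dtype=np.uint16)  # placeholder optical image
    dset = neossat.create_dataset("img_0001", data=frame, compression="gzip")
    # Metadata travels with the image as HDF5 attributes.
    dset.attrs["temperature_C"] = 21.5
    dset.attrs["pressure_kPa"] = 101.3
    dset.attrs["capture_mode"] = "sidereal"
    dset.attrs["origin"] = "NEOSSAT"

# Reading back: the image and its labels are retrieved together.
with h5py.File("gss_images.h5", "r") as f:
    dset = f["NEOSSAT/img_0001"]
    origin = dset.attrs["origin"]
```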
Expected Outcome: The procedures in the Methodology section must also be applied to the RSOnet, Cane (Diane’s), Gabriel’s, and Randa’s algorithms. The organization and categorization for stages 0 and 1 have been completed. Chapter 3 of RSOnet: An Image Processing Framework for a Dual-Purpose Star Tracker as an Opportunistic Space Surveillance Sensor will be used as a basis for this part, as it explains the stage 1 dataset and the procedures behind it. The final deliverable is a system server where users can input and retrieve data with the processing algorithms of RSOnet, Cane, and TKNTK. A functional graphical interface built using the Tkinter GUI framework will be supplied along with the system server. A user manual and software documentation will also be supplied.
Contribution: The GSS would enable efficient collaboration among researchers, as it lets algorithmic results for specific datasets be compared with one another. Currently, the transfer of information among members is done manually, which can be very tedious and time-consuming. Enforcing a naming convention and structure on the data storage would ensure continuity and avert disarray. A web API could demonstrate the significance of SSA to the public, who would be able to upload their own images for processing with our algorithms, thereby breaking down the general public’s preconceived notions of SSA and creating awareness.
UN SDG 11: Sustainable Cities & Communities
Supervisor: Peter Park
Profile: David chose to participate in research to use creativity in solving real-world problems with more flexibility than may be allowed outside of an academic environment.
Abstract: Microsimulation Analysis of Truck Signal Priority for Long Combination Vehicles on Arterials
Truck transportation plays a vital role in the Canadian economy, delivering goods to homes and businesses. Trucks deliver 72% of the 875 million tons of domestic goods that the country produces (Transport Canada, 2016). In 2021, Canada experienced a shortage of truck operators, with 18,000 unfilled positions (Trucking HR Canada, 2021). Trucks with multiple trailers, known as Long Combination Vehicles (LCVs), have been adopted to keep up with the demand for goods in the region while reducing the number of required operators. However, LCVs cause travel delays due to their slower acceleration and deceleration rates compared to cars and conventional trucks. Truck signal priority (TkSP) enables trucks to stop less frequently at signalized intersections by modifying the green-light signal timing. The goal of this research is to investigate TkSP strategies on a 5 km corridor of Dixie Road in Peel and identify optimal traffic conditions while accommodating LCVs. Traffic data including current signal timing, traffic volumes, and travel times are used to calibrate the Vissim microsimulation model before testing. Various configurations of TkSP and detector layouts are analyzed to improve traffic performance. The best TkSP strategy will differ from road to road, but TkSP provides another useful tool to consider when planning and improving roads that experience a high volume of truck traffic.
Program: Research Assistant
Supervisor: Gunho Sohn
Profile: I am Jared Yen and I have recently finished my undergraduate degree in Geomatics Engineering at YorkU. I spent a summer working as a survey assistant and performed many topographic surveys. After taking a computer vision course, I noticed the potential in using imagery to detect meaningful information like I had done as a survey assistant and decided to study further.
Abstract: Extracting Urban Roads and Lanes from Aerial Imagery with Artificial Intelligence
The positions of roads and lanes provide important data for high-definition maps used to aid navigation for autonomous vehicles, monitor infrastructure, and support many other applications. Very high-resolution aerial imagery captures the lane-level detail required for these applications quickly and over a large coverage area, and the automatic extraction of roads and lanes saves time and resources when map-making. Semantic segmentation of road and lane pixels from an image is a preliminary step towards extracting the more meaningful information used in maps, and convolutional neural networks have shown great performance improvements in semantic segmentation tasks. In this work, two open-source, state-of-the-art road and lane extraction methods are tested on a dataset of very high-resolution aerial images acquired over the City of Toronto. Both methods use a convolutional neural network for semantic segmentation and build upon its results to generate graph networks of lanes and roads more suitable for digital maps. The effectiveness of these methods on the acquired dataset is evaluated.
Supervisor: Mehdi Nourinejad
Abstract: Dynamic Parking Pricing Using Real-Time Payment Data
Parking demand is high and supply is low in urban centres, with little remaining infrastructure that can be allocated to parking. This mismatch between parking supply and demand has adverse consequences, including long parking search times and traffic congestion. The City of Toronto currently enforces an hourly pricing strategy through its Green-P app to manage parking demand, requiring drivers to pay based on their parking duration. However, the existing hourly pricing strategy does not realize the full potential of pricing as a parking management tool. My research investigates the impacts of a dynamic parking pricing strategy that adjusts parking rates in real time to better balance the supply and demand of parking. A dashboard developed in ArcGIS will compare the current hourly pricing strategy with the proposed dynamic pricing strategy using data analytics and modelling visualization.
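One common form of dynamic parking pricing nudges the rate toward a target occupancy. The rule below is a hypothetical illustration of the idea; the rates, thresholds, and step size are not City of Toronto policy or the study's actual model:

```python
# Illustrative occupancy-responsive pricing rule: raise the hourly rate
# when observed occupancy is above target, lower it when occupancy is
# well below target, and keep the rate within fixed bounds.

def adjust_rate(rate, occupancy, target=0.85, step=0.25,
                min_rate=1.0, max_rate=8.0):
    if occupancy > target:
        rate += step
    elif occupancy < target - 0.15:
        rate -= step
    return round(min(max(rate, min_rate), max_rate), 2)

rate = 3.0
for occ in [0.95, 0.92, 0.60, 0.88]:  # observed occupancy each interval
    rate = adjust_rate(rate, occ)
print(rate)
```

In a real deployment the occupancy signal would come from the live payment data, and the update interval, step size, and bounds would be tuned against observed demand.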
Kumar Vaibhav Jha
Program: Research Assistant
Supervisor: James Elder
Profile: I am currently a 4th-year undergraduate computer science major at York University. I chose to participate in research as I would like to explore the potential opportunities that research can lead me to. As someone who is interested in pursuing higher studies (such as graduate school), doing research is a good way to determine whether graduate school is for me. Along with this, I have a keen interest in computer vision and machine learning, and being able to learn more about these topics while also getting to work with them is a dream come true for me.
Abstract: Analysing Various Multi-Object Trackers on a Novel Dataset to Determine the Most Effective Tracker
Multi-Object Tracking (MOT) is one of the most widely used and researched fields of computer vision. Tracking involves associating detections across different frames of a video with the same object and storing the movement of each object as a track. One application of tracking is traffic analysis, wherein object tracking can help improve traffic monitoring. The goal of this project is to take an unlabeled video of a traffic intersection and analyse the performance of different trackers on this dataset. One major issue found with trackers on this dataset is the existence of fragmented tracks caused by vehicles crossing paths with each other; this results in many partial tracks that provide little information and limit proper further analysis of the video. The ideal tracker would be resistant to this and would limit fragmented tracks. In order to analyse the trackers quantitatively, we first labelled the video with tracks to establish ground truths, which were then compared to the tracking results to determine the quality of a tracker. The metrics most commonly used for this comparison are the MOT Challenge metrics, including Multiple Object Tracking Precision (MOTP) and Multiple Object Tracking Accuracy (MOTA), which measure the accuracy of the tracks, along with other metrics that measure track quality. We instead experimented with simpler metrics to make the results easier to compare: the goal is to create metrics that focus only on how closely a track matches the tracks in the ground truth, similar to how segmentation is evaluated. This will make future evaluations of trackers on this specific dataset easier and faster.
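One segmentation-style track metric, in the spirit described above, treats each track as the set of detections it covers and scores a predicted track against the ground truth by intersection-over-union (IoU). This is an illustrative sketch, not the project's exact metric:

```python
# IoU between tracks represented as sets of quantized detections.
# A fragmented track covers fewer ground-truth detections, so its IoU
# against the full ground-truth track is low.

def track_iou(pred, truth):
    """pred, truth: sets of (frame, x, y) detection tuples."""
    union = len(pred | truth)
    return len(pred & truth) / union if union else 0.0

truth = {(f, 10, 20) for f in range(10)}      # object visible in frames 0-9
fragmented = {(f, 10, 20) for f in range(4)}  # track lost after frame 3
print(track_iou(fragmented, truth))  # 0.4 — fragmentation lowers the score
```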
Nick Di Scipio
Program: Research Assistant
Supervisor: Andrew Maxwell
Abstract: Modifying and Optimizing the SARIT Micromobility Vehicle for Specific Use Cases on York's Campus
The SARIT (Safe Affordable Reliable Innovative Transportation) is a 3-wheeled micro electric vehicle designed to combat both the greenhouse gases produced by combustion-engine vehicles and mass congestion on roadways. The SARIT organization has partnered with Lassonde and tasked a research team of Lassonde students with identifying and investigating use cases that SARITs are capable of fulfilling, either in their current configuration or, as observed in most cases, with special modifications that enhance their function for a specific use. The research conducted by this sub-team pertains mainly to two of the major use cases identified by the team: modifying and optimizing the SARITs for food delivery, and for use by York Facilities to fulfill a variety of daily tasks.
Kuimou Yu, Rupayon Haldar
Program: RAY, Research Assistant
Supervisor: James Elder, Andrew Maxwell
Profile: Hi, my name is Kuimou. I am a 3rd-year undergraduate computer science student working on benchmarking deployments of computer vision mobility systems for various applications this summer. I am excited to participate in this research since computer vision has become more and more important in various areas, including improving people's well-being and productivity in different fields.
Abstract: Benchmarking Deployments of Computer Vision Mobility Systems for Various Applications
Computer vision has advanced rapidly in the last decade. Faster hardware and algorithms allow people to deploy computer vision tasks in more and more areas. However, small systems such as mobility platforms have strict power and size limits, and general-purpose processors do not perform well under those constraints. To address this problem, dedicated neural processing units have been developed that significantly outperform general-purpose processors. To characterize the performance gap between these neural processing units and how they perform on different vision tasks, we benchmark the hardware on a pedestrian detection task with varying pedestrian detectors on the SARIT platform. Our benchmark covers the vision system's performance, accuracy, and power consumption, and will establish a reference for deploying further vision tasks on such platforms. At the same time, we hope that our research can help further the development of hardware and software by revealing the trade-offs between power consumption, performance, and development flexibility on current systems.
Supervisor: Pirathayini Srikantha
Abstract: Effect of Electric Vehicle Charging Loads on The Power Grid
With the advancement of technology and the rising integration of electric vehicles (EVs) into the power grid, one of the most pressing questions is whether the grid can handle the rapid increase in electricity loads or whether upgrades to the distribution network are required. Optimal power flow (OPF) minimizes generation costs while respecting transmission line limits. This research analyzes the OPF of an IEEE 14-bus system when EV charging loads constitute 0%, 5.2%, 10.4%, and 36% of the total system load. This is done by changing the real and reactive power at each of the load buses to account for the various charging loads, while keeping the generation buses constant. The system is taken from the MATPOWER case files, modified, and run in MATLAB. Additionally, the associated power quality (PQ) events are observed at each of the EV load levels mentioned above, in both steady-state (i.e., stabilized) and transient conditions. The objective of this research is to identify issues in the power grid arising from the incorporation of EV charging loads. These simulations provide a cost-effective way to recognize short- or long-term issues with the grid and resolve them before implementation.
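The load-adjustment step can be illustrated with a small sketch: scale the real and reactive power at every load bus so that EV charging makes up a chosen fraction of the new total load, while generator buses are untouched. The bus values below are placeholders, not the actual IEEE 14-bus case data (which the study takes from the MATPOWER case files):

```python
# Toy load-adjustment step: add an EV charging component to (P, Q) at
# each load bus so EV charging is `ev_fraction` of the new system total.
# Generator buses are left unchanged (not represented here).

def add_ev_load(bus_loads, ev_fraction):
    """Scale (P, Q) pairs so EV charging forms `ev_fraction` of the
    new total system load."""
    factor = 1.0 / (1.0 - ev_fraction)
    return [(p * factor, q * factor) for p, q in bus_loads]

base = [(21.7, 12.7), (94.2, 19.0), (47.8, -3.9)]  # MW, MVAr (illustrative)
with_ev = add_ev_load(base, ev_fraction=0.052)     # the 5.2% scenario
```

The modified case would then be handed to the OPF solver (`runopf` in MATPOWER's MATLAB environment) to check line limits and generation costs under each EV penetration level.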
Shehnaz Islam, Keandre Webb
Program: USRA, RAY
Supervisor: Marina Freire-Gormaly
Profile: I'm a third-year electrical engineering student who is interested in signal processing. I strongly believe participating in this research will help deepen my knowledge of signal processing and learn how to implement it in the real environment. I also believe this will be a great opportunity to help me define my future career path.
Abstract: Using Drones and Object Detection to Statistically Analyze City Litter
Current litter audits in the City of Vaughan are done by staff members physically inspecting streets and recording the litter found. The process is labor-intensive, tiresome, time-consuming, and susceptible to inaccuracy due to human error. Furthermore, with the amount of smaller-sized litter (e.g., coffee cups, cigarettes) increasing, the task becomes extremely challenging. To resolve this issue, we propose drones equipped with cameras, sensors, GPS, and trash detection software that would fly above human height at relatively high speed to cover vast areas of land and detect litter. Once detected, the litter objects will be saved as images and then analyzed to measure their approximate size, weight, height, type, and volume, and sorted into different categories of litter. The categorization will be done using a machine learning classification algorithm trained on a large dataset of labelled litter images. The litter information gathered can be further used to create insightful statistics such as littering trends and predictions. The litter audit system will enable the city to effectively plan, monitor, and control litter pickup and management, take preventive and corrective actions, and improve the livability of the city by creating a clean and welcoming atmosphere.
Supervisor: Jinjun Shan
Profile: Gaurav is a 3rd-year computer science student. His research focus is on autonomous unmanned vehicles and their potential applications in military, civilian, and security areas.
Abstract: Behavior Trees for Autonomous Vehicles
The focus of this project was to architect the behavior module for a self-driving car application; the specific behavior being codified was handling a controlled intersection. The implementation tool used was a behavior tree, which looks at a sequence of inputs to modify the commands sent to the controllers used by the virtual car. Finite state machines (FSMs) are a common way to solve the high-level control problem in robotics and AI. An FSM is a directed graph-like structure that uses transitions to change the actions/parameters used by a robot: a transition is initiated when a condition metric is true, and once in a state, some action is performed before proceeding to the next state. However, many challenges appear when implementing FSMs, which are known to become unmanageable for large, complex systems. To have a fully reactive system, every state needs a transition to every other state, making the FSM a fully connected graph (O(n²) transitions). To tackle these problems, behavior trees (BTs) were introduced. A behavior tree is a directed graph with a tree structure. Unlike in FSMs, transitions in a BT are implicitly encoded in the tree structure, absolving the developer from explicitly implementing all transitions. Adding or removing a node or subtree therefore has no impact on the other nodes, which makes development easier and less prone to bugs. Behavior trees offer an efficient and flexible environment for researchers who want to model vehicle behavior: they can be tested in a simulation environment and then transferred to a hardware application.
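The implicit-transition idea can be shown with a minimal behavior-tree sketch. A Sequence node ticks its children in order and fails fast, so "what happens next" follows from the tree structure rather than from enumerated transitions. The node names and intersection logic below are illustrative, not the project's actual implementation:

```python
# Minimal behavior tree: Action leaves return SUCCESS/FAILURE, and a
# Sequence composite runs children left to right, stopping at the first
# failure. No explicit state-to-state transitions are written anywhere.

SUCCESS, FAILURE = "SUCCESS", "FAILURE"

class Action:
    def __init__(self, name, condition):
        self.name, self.condition = name, condition
    def tick(self, state):
        return SUCCESS if self.condition(state) else FAILURE

class Sequence:
    def __init__(self, children):
        self.children = children
    def tick(self, state):
        for child in self.children:
            if child.tick(state) == FAILURE:
                return FAILURE
        return SUCCESS

# Toy controlled-intersection behavior: stop, wait for clear, proceed.
handle_intersection = Sequence([
    Action("stop_at_line", lambda s: s["at_stop_line"]),
    Action("wait_until_clear", lambda s: s["intersection_clear"]),
    Action("proceed", lambda s: True),
])

result = handle_intersection.tick({"at_stop_line": True,
                                   "intersection_clear": False})
print(result)  # FAILURE: the sequence halts at wait_until_clear
```

Inserting a new check (say, a pedestrian-crossing node) means adding one child to the Sequence; in an FSM the same change would require rewiring transitions in and out of every affected state.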
Supervisor: Maleknaz Nayebi
Abstract: Software and Social Media to the Rescue: Mobile App for the Next Pandemic at York
In the upheaval caused by the pandemic, York students, like others worldwide, browsed social media and other unofficial channels for information on logistics and regulations. The lack of software and system capabilities to match the needs of end-users, which in this case drove students to unofficial channels, is called the “requirements gap”. Software is still far behind our expectations, and attempts to close the gap have been unsuccessful. Requirements have mostly been elicited through traditional methods (interviews, brainstorming, prototyping); these retrieve good requirements, but the final product still leaves users unsatisfied. During the pandemic, many software products tried to address the general public’s needs by assuming those needs. Some believe giving users what they want is the key to eliminating this gap; however, this research is based on the observation that users often do not know what they want. In particular, we answer the question: “What software services can be offered to students to improve their experience in a pandemic?” To this end, we gathered the social media feed of York students from the York University subreddit between March 2020 and September 2021. Knowing the students’ concerns during the pandemic, we aimed at mining (rather than questioning and surveying) their “needs”, extracting them, and mapping them into software services using an automation pipeline built on several machine learning models and natural language processing techniques. We demonstrate that our proposed model significantly reduces the gap and provides software services that better match the students’ needs. While we worked particularly on the COVID-19 pandemic, the model developed in this research will be generalized.
Supervisor: Peter Y. Park
Abstract: A New Model for Estimating Pedestrian Level of Service
Arguably, walking is the most important transportation mode, especially in urban areas. It is an essential component of all trips and provides physical access to various types of facilities. Different approaches to estimating the Pedestrian Level of Service (PLOS) have been adopted to examine pedestrian walking conditions. Current approaches are largely based on the concepts used for estimating vehicular LOS and therefore lack quantitative definitions of important pedestrian characteristics, since pedestrian movements can be more intricate than those of other transportation modes. This limitation results in an inaccurate assessment of the real quality and operational performance of transportation facilities. The goal of this research is to bridge this gap by developing a new model to estimate real-time PLOS, with a focus on understanding the important underlying factors affecting PLOS in order to present an improved performance measure. The objective is to present new indicators that better represent pedestrians’ dispersion and multi-directional movement. Addressing this issue is particularly challenging because the new indicators can be highly correlated with each other, and this correlation must be handled to avoid ambiguity and contradiction in determining the true PLOS. This highlights how valuable the proposed PLOS model will be to transportation engineers and planners, as it will help decision-makers improve existing infrastructure, such as pedestrian walkways along urban streets, and plan future developments more effectively.
Supervisor: Jinjun Shan
Profile: John is a software engineering student who is passionate about software development, collaborating with others to solve challenging problems, and developing innovative solutions. He is interested in the principles, design, and implementation of navigation and localization models that enable fully autonomous and unmanned vehicles.
Abstract: LiDAR Assisted Fiducial Marker-Based Navigation for Autonomous Vehicles
Automation and robotics offer significant potential in applications including monitoring, surveying, and data collection. Autonomous robots address challenges such as operating over long hours, in bad weather, on uneven and varied terrain, and under different lighting conditions. The fundamental capabilities these robots need in order to operate efficiently are accurate localization (current location tracking) and navigation. However, most current methods for autonomous vehicle navigation either demand extensive physical equipment in the space (such as Bluetooth beacons and cameras) or are computationally demanding (e.g., Simultaneous Localization and Mapping). To reduce these burdens, we propose a system that relies on light detection and ranging (LiDAR) and fiducial markers to achieve localization and navigation of unmanned autonomous vehicles (UAVs). Fiducial marker-based (ArUco) detection offers rapid, low-latency 6D pose estimation (3D position and 3D orientation). The system has three fundamental capabilities: marker detection, state estimation/localization, and navigation. It first detects ArUco markers (a specific type of fiducial marker resembling QR codes/barcodes) using an onboard LiDAR scanner to survey the surroundings. The robot’s location relative to the marker is then calculated from the marker’s detected position, and this relative transform is used to generate a trajectory that aligns the robot with the centre of the marker. The proposed system can thus be used to systematically create an optimal marker network map and supply the localization and navigation requirements of UAVs.
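The alignment step can be sketched in 2D (an illustrative simplification, not the authors’ implementation; the `align_goal` function, the fixed stand-off distance, and the frame conventions are all assumptions): given the marker’s centre and facing direction in the robot’s frame, compute a goal pose a fixed distance in front of the marker and the heading toward it.

```python
# 2D sketch: goal pose that puts the robot a stand-off distance in front
# of a detected marker, facing the marker centre.
import math

def align_goal(marker_xy, marker_yaw, standoff=1.0):
    """marker_xy: marker centre in the robot frame (m); marker_yaw: direction
    of the marker's outward normal in the robot frame (rad).
    Returns (goal_x, goal_y, heading_toward_marker)."""
    gx = marker_xy[0] + standoff * math.cos(marker_yaw)
    gy = marker_xy[1] + standoff * math.sin(marker_yaw)
    heading = math.atan2(marker_xy[1] - gy, marker_xy[0] - gx)
    return gx, gy, heading

# Marker 3 m straight ahead, its normal pointing back at the robot:
gx, gy, hd = align_goal((3.0, 0.0), math.pi)
print(round(gx, 2), round(gy, 2))  # goal sits 1 m in front of the marker
```

A full 6D version would use the complete rotation/translation from the LiDAR detection, but the idea of inverting the relative transform to obtain a goal pose is the same.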
Program: Research Assistant
Supervisor: Kamelia Atefi Monfared
Profile: My name is Lamek Berhane. I'm working alongside Alireza Azizi, Prof. Kamelia Atefi-Monfared, and Prof. Matthew A. Perras on this study (DTS on rocks). I chose to participate in this research because I have long wanted to be part of a team and to contribute as a team player toward advancing something.
Abstract: Characterization of Rock Tensile Strength for Resilient Infrastructure Design
When designing civil engineering projects such as tunnels and foundations, the tensile strength of the rock is one of the crucial characteristics. A few well-known methods are currently used to measure the tensile strength of rock, such as the direct tensile test, the Brazilian splitting test, and the three-point bending test. For this research, the direct tensile test was employed. One disadvantage of this method is connecting the sample to the testing machine: in some circumstances, a sample's cross-section can lose some strength during this process (for example, a dog-bone specimen has a thinner cross-section in the middle). To tackle this issue, a solid connector was implemented, constructed of steel plates and rods machined on a lathe so they could be epoxy-glued to the sample and linked to the machine while preserving the sample's strength. The study will further use FLAC3D to numerically simulate the tensile experiment.
Sayee Srikarah Volaity, Debadrita Das
Program: Mitacs Globalink
Supervisor: Liam J. Butler
Profile: Sayee and Debadrita are Mitacs Globalink research interns at the university under the supervision of Dr. Liam Butler. Their research aims to decrease cracking in concrete structures through the use of self-healing concrete.
Abstract: Bond Strength Recovery between Concrete and Steel Rebar Interface in the Presence of Super-Absorbent Polymers
Concrete structures are designed to limit cracking to acceptable ranges; nevertheless, excessive cracking can lead to steel corrosion and a reduction in the structure’s strength, durability, and service life. Repairs demand energy, time, skilled labour, and money. Self-healing concrete (SHC) is one potential solution to these problems. Adding self-healing agents to concrete enhances its autogenous tendency to seal cracks, making the self-healing mechanism autonomous and recovering part of the mechanical properties. Super-absorbent polymers (SAPs) are added to the concrete mix as a self-healing agent: they can absorb about 300 times their weight in water. When a crack forms in concrete, SAPs absorb water from the surrounding environment, store it, and then release it to stimulate self-healing. Our research focuses on the bond strength recovery at the concrete and rebar interface in the presence of SAPs. Bond strength is a vital parameter in concrete structures, as it determines the tensile stress a member can bear. Our experimental program includes casting concrete cylinders, initially cured for 28 days wrapped in wet burlap. Some specimens are cracked to a controlled width, and some are left uncracked. The specimens are then cured again, and rebar pull-out tests are conducted to determine the bond strength. A comparative analysis is carried out between normal concrete and SHC, cracked and uncracked, and wet- and dry-cured specimens. Results from this study may bring SHC closer to use in building construction and lead to more sustainable structures, owing to reductions in energy and manpower and an increase in service life.
Supervisor: John Gales
Profile: Emma is entering her third year of Civil Engineering at York University. Fire Safety Engineering is her area of interest. This summer, she is exploring the behaviors of various materials in fire and gaining valuable lab experience and skills. Specifically, Emma is focusing on steel beam to column connections in a fire.
Abstract: Steel Beam to Column Connection Resiliency in Fire
Structural designs are constantly evolving to include more sustainable and aesthetic features. For infrastructure to remain safe and resilient, the understanding of fire behavior and safety must advance at the same rate. Optimizing fire protection can yield economic and material savings while also decreasing environmental impact, as excess fire protection worsens a structure's life-cycle impact, which directly affects SDG 11, Sustainable Cities and Communities. Although optimizing fire protection is a priority, the Authorities Having Jurisdiction often have limited exposure to fire education and limited resources to evaluate these designs. A performance-based solution is ideal, as it can check all scenarios for optimization; however, little research has been done in this area. This study focuses on steel beam to column connections, often the most vulnerable part of a structure. To understand the resiliency of these connections, the experimental program consisted of pool fire tests (~900°C exposure temperatures) run for different durations (around 15, 30, and 60 minutes) within York University's High Bay Lab. These tests were performed to better understand the deformation behavior and the dissipation of forces and heat in the connections of a simple steel frame structure. The tests were filmed using narrow spectrum illumination technology, which filters out flames to create a clear image of the structure and quantify how the connections react to the generated heat. Thermocouples were applied to the steel frame to measure the temperatures reached. Knowledge from this study will provide insight into how steel connections behave when exposed to thermal forces and will allow proper modelling tools to be validated and verified.
Jowel Akkeh, Pavly Yacoub
Supervisor: Peter Park
Abstract: Development of an ArcGIS Online Visualization Tool to Analyze and Display Response Time
Every year there are thousands of fire incidents across Ontario. Every minute removed from the response time is invaluable, as delays can have significant consequences such as damage, injury, and fatalities. This study developed a tool that assesses fire services’ response times to incident locations using the City of Vaughan’s 911 data. The tool visualizes which areas, districts, and streets have the highest response times, considering factors such as time of day or day of week, to provide an overview of the underlying bottlenecks that delay first responders. With key problem areas highlighted, city officials can form effective, uniquely tailored solutions to reduce response times. The tool was developed using an ArcGIS Online dashboard, a cloud-based web geographic information system for mapping and analysis. It enables users to select incident locations or incident types; the filtered data is then shown on an interactive map. Specifically, the dashboard can filter fire incidents by month, day of week, alarm type, fire district, and time of day, and it displays the average response time in the selected areas and fire districts. The changes in response times follow expected peak-hour trends: between 7 and 9 am and between 3 and 6 pm, response times increase due to rush-hour traffic and congestion. Weekends also show a decrease in fire incidents, consistent with transportation patterns, as the movement of goods and people decreases. This study can be employed to assist the Fire Master Plan (FMP) of municipalities such as Vaughan; the FMP is an implementation plan drawn up to assess fire protection needs and services in the community, and this study can help develop FMPs more efficiently.
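The dashboard's core filter-and-aggregate step can be mimicked in a few lines (illustrative only; the actual tool is built in ArcGIS Online, and the incident rows and district names below are invented):

```python
# Toy version of the dashboard logic: filter incidents to peak hours and
# report the mean response time per fire district.
from statistics import mean

incidents = [  # (district, hour_of_day, response_time_min) -- made-up rows
    ("D1", 8, 7.5), ("D1", 14, 5.0), ("D2", 8, 9.0), ("D2", 17, 8.2),
]

def mean_response(incidents, hours):
    by_district = {}
    for district, hour, rt in incidents:
        if hour in hours:
            by_district.setdefault(district, []).append(rt)
    return {d: round(mean(v), 2) for d, v in by_district.items()}

peak = set(range(7, 10)) | set(range(15, 19))  # 7-9 am and 3-6 pm
print(mean_response(incidents, peak))
```

The real dashboard layers the same kind of filtering (month, alarm type, district) over an interactive map rather than a dictionary.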
Supervisor: Gunho Sohn
Profile: As a Geomatics Science student, Nadine is interested in the use of GIS and GNSS to acquire and generate agricultural and resource data that helps reduce uncertainty in farming through better land management practices.
Abstract: Applying 'Domain Adaptation' To Single Tree Detection Neural Network
This research is a subset of the YUTO (York University Teledyne Optech) Tree5000 project: A Large-scale Airborne LiDAR Dataset for Single Tree Detection. The objective of this study is to apply domain adaptation strategies with deep learning methods to transfer a tree detection network trained on the summer season to the winter season. Domain adaptation addresses situations in which labelled data from the target domain is restricted, scarce, or unavailable: using labelled data from a related source domain, a neural network is trained and refined to detect objects in the target domain. LiDAR data of the campus was collected during the summer (September 2018) under leaf-on conditions and during the winter (December 2019) under leaf-off conditions. In the winter dataset, most deciduous trees have lost their leaves, so their crowns are reduced in size and volume and the shape of the trees changes. The aim of this research project is to reuse the ground truth data obtained from the summer season for winter conditions. The winter data was refined and pre-processed using software such as ArcGIS and LAStools before being fed into the existing neural network. It is hypothesized that the network will detect coniferous trees more reliably, as they remain relatively unchanged during the winter, while the accuracy for deciduous trees will be lower. The availability of annotated data remains the most critical bottleneck for the deep learning community, especially for environmental objects. This research offers an opportunity to save time, given the substantial manual labour involved in labelling data.
UN SDG 13: Climate Action
Supervisor: Rashid Bashir
Abstract: Effect of Climatic Conditions on the Development of Thermal Loads on a Concrete Bridge Deck
Thermal loads can have a significant influence on the structural stability of bridge decks. The thermal stresses acting on a bridge are governed by environmental factors such as temperature, wind speed, cloud cover, and solar radiation. Engineers must first understand the relationship between climatic conditions and thermal loads in order to make bridge design codes more resilient to climate change. The goal of this study is to evaluate the relationship between different climatic conditions and the resulting thermal loads. This is achieved through a statistical study of results from a three-dimensional finite element thermal model of a typical box girder bridge. The climatic variables serve as inputs to the model, and the output is the thermal load acting on the concrete box girder, stated in terms of the temperature difference between the top and bottom walls of the box cross-section. A total of 108 simulations are run for 20 Canadian cities with varying climatic conditions, utilizing all possible combinations of climatic variables. The findings indicate how the climatic factors impact the thermal load acting on the top and bottom sides of the box girder. The data also highlight the climatic variables with the greatest influence on thermal loads, notably direct normal solar radiation. Furthermore, a pattern has been identified that demonstrates how thermal loads vary with geographical location across Canada.
Supervisor: Magdalena Krol
Profile: Aditi is in her final year of Civil Engineering. She shares, “participating in research was an opportunity to explore my passion and to challenge myself with hands-on experience in conducting experimental labs”.
Abstract: Effect of Ionic Strength on Bisulfide Diffusion through Bentonite
Waste management is one of the most pressing matters for nuclear energy producers. For safe long-term storage of used nuclear fuel, the Nuclear Waste Management Organization of Canada plans to design and implement a deep geologic repository (DGR) 500 m underground within a host rock formation. The DGR will house copper-coated stainless steel used fuel containers (UFCs) within an engineered barrier system (EBS) in which highly compacted bentonite (HCB) surrounds the UFCs. UFC corrosion is one of the concerns in DGR design, as it affects the long-term integrity of the UFCs. Under anaerobic conditions, microbiologically influenced corrosion could take place: bisulfide (HS-) produced by sulfate-reducing bacteria can be transported through the HCB, reach the UFC surface, and corrode it. However, because bentonite has low permeability, HS- transport through HCB will be diffusion dominated. Understanding HS- transport behaviour through HCB is therefore critical for accurate prediction of copper corrosion. This study aims to quantify the HS- diffusion coefficient (De) through MX-80 bentonite by performing through-diffusion experiments under a range of anticipated DGR conditions (e.g., temperature, ionic concentration). This presentation will describe the experimental methods and the results of experiments on the effect of ionic concentration on bisulfide diffusive transport. The results will enable EBS performance evaluations and thus build confidence in the Canadian DGR design.
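For context, a standard through-diffusion analysis (a textbook relation, not necessarily the exact method used in this study) infers De from the steady-state tracer flux across a specimen of thickness L and cross-sectional area A, with source concentration C0 and the collection reservoir held near zero concentration:

```latex
% Steady-state flux through the specimen (Fick's first law):
J_{ss} = D_e \, \frac{C_0 - C_L}{L} \;\approx\; D_e \, \frac{C_0}{L}
\qquad (C_L \approx 0)

% D_e from the slope of the cumulative mass curve Q(t) in the collection cell:
D_e = \frac{\mathrm{d}Q}{\mathrm{d}t} \cdot \frac{L}{A \, C_0}
```

In practice, the slope dQ/dt is read from the linear portion of the cumulative mass-versus-time curve once steady state is reached.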
Kunwar Aneeq Khan
Program: Mitacs Globalink
Supervisor: Rashid Bashir
Profile: Kunwar is a Mitacs Globalink Research Intern working on the project titled 'Generation of Climate Data for Geotechnical and Geoenvironmental Problems' under the supervision of Prof. Rashid Bashir. Kunwar is interested in understanding the effects of climate change in the engineering sector.
Abstract: Comparative Analysis of Multiple General Circulation Models (GCMs) and Observed Climatic Datasets for Different Climatic Change Scenarios
Climate change studies are being conducted to assess the impact of climate change in fields such as geotechnical engineering, geology, and hydrology. Future climate data at a daily resolution is required to carry out such impact studies. The purpose of this study is to compare various GCMs and other sources for predicting future climatic data and to assess their validity for impact studies by comparing them with observed climatic datasets. Using only a single GCM or a single source in an impact study may not take the full range of possibilities into consideration, whereas using multiple GCMs or multiple sources rectifies this issue. In this study, we compiled climatic datasets for the city of Regina, Saskatchewan, from the Canada Climate Data Portal (CCDP) and the Pacific Climate Impacts Consortium (PCIC) for various GCMs. The compiled predicted climate dataset was then compared with the historical measured climatic datasets. We used visual comparison and statistical analyses to deduce the reliability of different GCMs in reproducing the historical data; this validation process is essential for selecting which GCMs should be used in impact studies. We observed that the predicted historical climate data for GCMs from the CCDP did not correlate well with measured values and therefore might not be suitable for impact studies. For climatic data from the PCIC, we observed that while most of the GCMs predict the historical maximum temperatures well, nearly all of them underestimate the minimum temperature values. For precipitation, the predicted values show only minor variations from the observed dataset for most GCMs. We therefore conclude that future climatic datasets from the PCIC can be used for climate impact studies.
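The statistical comparison step can be sketched with standard skill scores (an illustrative, stdlib-only example; the temperature series below are invented, not CCDP/PCIC data, and the study's actual statistics may differ):

```python
# Score a GCM hindcast against observations with bias, RMSE, and Pearson r.
import math
from statistics import mean

def pearson(x, y):
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

def score(predicted, observed):
    bias = mean(p - o for p, o in zip(predicted, observed))
    rmse = math.sqrt(mean((p - o) ** 2 for p, o in zip(predicted, observed)))
    return {"bias": round(bias, 2), "rmse": round(rmse, 2),
            "r": round(pearson(predicted, observed), 2)}

observed = [-12.1, -8.4, 0.5, 9.8, 16.2, 19.1]   # made-up monthly min temps
gcm      = [-14.0, -10.1, -0.8, 8.5, 15.0, 18.2] # hindcast with a cold bias
print(score(gcm, observed))
```

A systematically negative bias on minimum temperatures, as in this toy example, is exactly the kind of signal the study reports for most PCIC GCMs.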
Supervisor: Mark Gordon
Abstract: Autonomous Pilot Balloon Tracking For Boundary Layer Wind Data Collection
Wind speeds are a crucial atmospheric quantity used to predict the weather and model the global climate. While measuring wind speed near the surface is relatively inexpensive, measuring winds throughout the boundary layer (up to 1 km above the ground) is costly. Boundary-layer winds are typically measured using radiosondes, which require expensive infrastructure, and the device is not reusable. This research aims to develop a cost-effective device (less than $500) that can collect wind speed and direction data up to 1 km above the ground without fixed infrastructure. The approach revisits a historic method of wind measurement that entails manually tracking a helium-filled pilot balloon’s position with a theodolite. To improve on this, an Autonomous Pilot Balloon (Pibal) Tracking Device was created that leverages a low-level object tracking algorithm to detect the balloon in the air. Using servo motors and a pan-tilt mechanism, the device records its angular position in the pan and tilt directions over time, which is used to calculate the wind speed. This research provides a cost-effective way of collecting wind speed and direction data at virtually any location.
Supervisor: Satinder K. Brar
Profile: Anupriya is doing hands-on research and building strong ties within the science community while working with Dr. Brar as an NSERC USRA summer student.
Abstract: Analysing the Role of Novel Bacterial Isolates M. esteraromaticum and B. infantis in BTEX Bioremediation under Different Culture Conditions
Petroleum oil is a valuable resource commonly used for many anthropogenic purposes such as printing, vehicle fuel, and heating buildings. However, the increased use of petroleum over the years has contributed to significant amounts of toxic and carcinogenic hydrocarbons being released into the environment. Of these, BTEX (benzene, toluene, ethylbenzene, and xylenes), a group of monoaromatic, volatile hydrocarbons typically found in petroleum oil, are of great concern. Studies have shown that long-term exposure to the BTEX mixture can cause neurological and respiratory disorders, genetic defects, and other critical health problems. Although chemical and physical removal methods are available, these procedures are expensive and can cause immense damage to ecosystems. In contrast, bioremediation is a more promising and eco-friendly method to remove these harmful contaminants from the environment. In this study, the effects of temperature and of different electron acceptors were analysed on two novel BTEX-degrading strains, Microbacterium esteraromaticum and Bacillus infantis, previously isolated from subsurface soil. Activity tests were performed to better understand the degradation capabilities of these bacteria. The results are expected to show that (i) increased temperature favours BTEX degradation and (ii) the presence of other electron acceptors (nitrate and sulphate) may affect the patterns of aerobic degradation under minimal-oxygen conditions.
Supervisor: Sunil Bisnath
Abstract: Receiver Front-end Integration for Navigation Satellite-based Reflectometry
The freely available Global Positioning System (GPS), the best-known of the Global Navigation Satellite Systems (GNSS), is widely used for day-to-day navigation. GNSS is also increasingly utilized for determining soil moisture via a technique called GNSS reflectometry (GNSS-R), which uses surface-reflected GNSS signals to acquire Earth surface characteristics. Existing soil moisture measurements rely on satellite radiometers to map soil water content; these are not cost-effective and are rarely available for small areas. With GNSS-R, the measurements are performed at low cost and are widely available in any open space. The objective of this research is to develop a novel Field Programmable Gate Array (FPGA)-based software-defined radio (SDR) GNSS receiver capable of processing GNSS surface reflections to measure surface soil moisture content. The front-end of the SDR receiver is connected to two antennas: a zenith-facing antenna that detects the direct GNSS signal and a nadir-facing antenna that detects the reflected GNSS signals. The front-end captures the GNSS data and feeds it into the SDR receiver, which performs the signal processing needed to determine the soil moisture measurements. The current target milestone is to obtain real-time GNSS signals with the front-end and SDR receiver as a drone payload, with the required signal processing performed as the GNSS signals are received. My contribution will be configuring and implementing the front-end with the SDR receiver and enabling data capture in real time. The expected results will significantly advance GNSS-R applications for remotely sensed soil moisture measurement.