Week of 04/09/2018 - Progress Report 15
- Tamara Jovanovic
- Apr 17, 2018
Tamara Jovanovic
Topics covered: New high-power laser implementation, data collection
Materials used: Optical system and optical instruments, Matlab
This week, I implemented a new high-power laser. After some inconclusive results from the previous couple of weeks of data collection, I decided to switch from the linear laser to the high-power one, which could give us up to 40 nW of power through the collimators in free space. This would ensure maximum power, which could hopefully give us conclusive results in the lab.
Since the system is now fully set up and stable, the only thing left to do is take a lot of data in lots of different ways and look for a pattern which could give us the end result for our project.
First, after switching to the high-power laser, it is worth noting that the absorption spectrum now looks a little different: it is no longer linear. After the cuvette with distilled water was placed between the collimators, the waveform looked like this:

The high-power laser output is much stronger and scatters all around, which produces this waveform. After adjusting the collimators to remove the ripples on the right-hand side of the waveform, the line was smooth and the system was ready for testing.
The first benchmark test was to determine whether we lose power over time, and if so, how much. I checked this by running the distilled water 75 times at our usual settings (High 1 sensitivity, 2 nm resolution, 15 averaged waveforms) to determine calibration. Although the spectrometer can only run 7 trials at a time, Jonathan installed the Keysight program on the lab computer, and I was able to write the program that Ezequiel perfected, so collecting 75 iterations was easy, though it took some time. Unfortunately, the results were inconclusive; they need to be discussed further so we can figure out which steps to take going forward.
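Since the instrument caps each run at 7 trials, the 75 calibration iterations have to be gathered in batches. Our actual code is in Matlab; the following is a minimal Python sketch of that batching logic, with `acquire_batch` as a hypothetical stand-in for the real Keysight/GPIB call:

```python
# Minimal sketch of batched acquisition: the spectrometer accepts at most
# 7 sweeps per run, so larger trial counts are split into several runs.
# acquire_batch is a hypothetical placeholder for the instrument call.
def collect_trials(total_trials, max_per_run=7, acquire_batch=None):
    """Collect `total_trials` traces in batches of at most `max_per_run`."""
    traces = []
    remaining = total_trials
    while remaining > 0:
        n = min(max_per_run, remaining)
        # Placeholder data when no instrument hook is supplied.
        batch = acquire_batch(n) if acquire_batch else [None] * n
        traces.extend(batch)
        remaining -= n
    return traces

traces = collect_trials(75)
print(len(traces))  # 75 traces, gathered over 11 instrument runs
```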
The next step was collecting 20 iterations of each of my glucose solutions, plus 20 trials of pure distilled water. The autocollect code automatically saved an Excel spreadsheet with the data from all 20 trials and performed the plotting analysis. We are going to discuss the results from this data collection run and see what is next for us; this will probably entail continuing to take data and looking for a pattern.
Plans For Next Week:
Keep taking A LOT more glucose data
Find a pattern while doing so
Finalize tolerances
Ezequiel Partida
Topics covered: Code Optimization for Complete Automated Data Analytics, Non-Linear Optical System Data Analysis
Materials used: Matlab
This week, my primary concern was to finally update our data acquisition Matlab codes so that they work as an automated data collection and analysis system. Before this week, we had one code that collected data, one code that analyzed the data with error and subtraction, and one code that analyzed all data together. However, we had to run each of these individually, and save and inspect plots manually. This week, I spent a long time embedding all of these functions together to make things easier for us. The reason this was at the top of my priorities is that Jonathan finally set up the lab computer to work with the GPIB, and I want us to have easy-to-use, convenient code for the years to come.
The function that I created is shown below in Figure 1. Embedded functions are shown in Figure 2 and Figure 3. It can do the following.
1. Auto-collect data from the OSA.
2. Save all collected data in a spreadsheet in a controlled directory for the current date.
3. Save any corresponding and significant plots in the same controlled directory.
4. Distinguish between data taken for Calibration, Water Solution, and Glucose Solution.
5. Perform 1-4 autonomously, simply by running the function.



Figure 3: Helper Function for Analysis of Glucose Concentrations
An example of how this new set of code performs is shown by the data that Tamara and Jonathan took this week. After Dr. Asghari helped align our optical setup with a new non-linear fiber for higher power output, glucose and calibration data were taken. The automated function is called AutoCollect_Glucose(). Tamara and Jonathan used this code only 7 times to do the following.
1. AutoCollect_Glucose('Cal', 75): take 75 trials of water data to observe the stability of the new non-linear system over a long period of time.
2. AutoCollect_Glucose('Water', 20): 20 trials of water data for reference.
3. AutoCollect_Glucose('G10mg', 20): 20 trials of 10 mg/dL glucose solution.
4. Step 3 repeated for 70, 100, 140, and 300 mg/dL glucose solutions.
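The workflow those calls follow can be sketched compactly. The real implementation is Matlab; this is a hypothetical Python analogue of the save-to-a-dated-directory behavior, with `acquire` standing in for the actual OSA query:

```python
import csv
import os
from datetime import date

def autocollect_glucose(label, trials, acquire=None, root="."):
    """Python sketch of the AutoCollect_Glucose() workflow described above.

    Collects `trials` traces tagged with `label` ('Cal', 'Water',
    'G10mg', ...), then saves them as one spreadsheet inside a directory
    controlled by the current date. `acquire` is a placeholder for the
    real instrument call; it defaults to a dummy reading here.
    """
    outdir = os.path.join(root, date.today().isoformat())
    os.makedirs(outdir, exist_ok=True)           # controlled directory per date
    rows = [acquire() if acquire else [0.0] for _ in range(trials)]
    path = os.path.join(outdir, f"{label}.csv")  # one spreadsheet per label
    with open(path, "w", newline="") as f:
        csv.writer(f).writerows(rows)
    return path

# The seven calls from the list above would map to:
# autocollect_glucose('Cal', 75); autocollect_glucose('Water', 20);
# autocollect_glucose('G10mg', 20); ... and so on for 70-300 mg/dL.
```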
All of the results of running the function only 7 times are stored in the controlled directory as shown below, in Figure 4.


Since the alignment of our system happened late in the week, Tamara and Jonathan were only able to take data once. Also, making solutions, taking calibration data, and taking all other types of data takes hours, so time is a factor. Therefore, we only have one set of data to analyze this week. Figure 5 below shows the stability of the system over 75 trials, with a zoomed-in version in Figure 6.


By looking at the error points above, it can be seen that the smallest error we got over 75 controlled trials was around 0.1 dBm. Before, we were able to get a difference of around 0.009 dBm, so this week we basically have a system that is about ten times less stable. This does not look good. However, we continued to take data because we wanted to see whether the larger bandwidth might reveal something different from previous weeks, noise aside. After taking data for all concentrations, the subtraction spectra are plotted in Figure 7.

Figure 7: Subtraction Plots for Glucose Concentrations
As can be seen above, there is no real conclusion from the subtraction spectra. There is too much noise, the plots do not follow any sort of trend, and worst of all, we have not seen this behavior before. Nonetheless, this can be explained by the stability of the system. For example, for the 100 mg/dL glucose data at 1553 nm, the error is around 0.017 dBm, as seen in Figure 8. However, for water at this wavelength, the error is around 0.04 dBm, more than twice as much. Additionally, for the calibration plot in Figure 6, the error was around 0.1 dBm, which means the stability gets worse over time. Therefore, we do not have a stable system as of now. We need to figure out how to reduce the error in our system, and then go full speed with data collection to look for trends.
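For reference, the stability numbers quoted above can be computed from the repeated traces. This is a Python sketch under the assumption that "error" means the trial-to-trial standard deviation at each wavelength point; the traces here are simulated, since the real ones come from the OSA:

```python
import numpy as np

# Assumed stability metric: standard deviation of the measured power
# across repeated trials, evaluated at every wavelength point.
# Simulated stand-in data: 75 trials x 500 wavelength points around
# -30 dBm with ~0.1 dBm of trial-to-trial noise.
rng = np.random.default_rng(0)
traces = -30.0 + 0.1 * rng.standard_normal((75, 500))

# Spread across trials at each wavelength, in dBm.
error_per_wavelength = traces.std(axis=0)
print(f"smallest error: {error_per_wavelength.min():.3f} dBm")
```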

Plans For Next Week
Discuss Results
Optimize System Alignment for Stability
Data Collection, A LOT, A LOT, A LOT
Jonathan De Rouen
Topics covered: Communication with the lab computer and the spectrometer. Optical alignment
Materials used: Optical Power meter, Keysight 2017, Matlab, Yokogawa AQ6370D spectrum analyzer
The focus for this week was to automate the system again to improve data collection, and to introduce a high-power source for our system. I got in contact with Peter Alison, and we found out that the installation of the Keysight program was incomplete: it was missing two files that are necessary for the program to access its device manager. We copied the files from my computer and managed to get the computer in the optics lab to interface with the spectrum analyzer. The ultrafast photonic vibrometer group also confirmed that they can use the codes they wrote to interface with the oscilloscope as well.
Another aspect we worked on was aligning the high-power source to the optical system. With the help of Dr. Asghari and another grad student, we were able to get 11 mW in free space and over 2 uW through our sample. Also with Dr. Asghari's help, we determined a new way of calibrating our system to reduce the ripples on the OSA: we will adjust the collimators rather than the rotational stage.
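As a quick back-of-the-envelope check (my calculation, not a measurement from the report), the drop from 11 mW in free space to about 2 uW through the sample corresponds to roughly 37 dB of loss:

```python
import math

# Rough loss between the free-space beam and the sample path,
# using the two power readings quoted above.
p_free_space = 11e-3  # W (11 mW in free space)
p_sample = 2e-6       # W (~2 uW through the sample)
loss_db = 10 * math.log10(p_free_space / p_sample)
print(f"loss: {loss_db:.1f} dB")  # about 37.4 dB
```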
I also assisted Tamara in gathering data this week. We decided to use the same approach as before so we could observe what effect the higher power has on the system.
For next week
Gather as much data as possible
Determine at what wavelength the samples are consistent and accurate relative to each other