On-Demand | One Tool for Stimulus Presentation, Eye Tracking & Physiology Data

BIOPAC citations are continually updated; the current count is over 50,900.

BIOPAC now offers full integration of the new Stimulus Presentation, Eye Tracking, and Physiology data acquisition and analysis in AcqKnowledge software. This combination of tools, integrated into one solution, allows researchers to easily manage all data collection and analysis in a single application. BIOPAC hosts an online presentation to review this new functionality and show you how to seamlessly synchronize stimulus presentation, eye tracking, and physiology data, as well as your areas of interest (AOIs).

The talk focuses on optimizing setup, highlighting the benefits of a fully integrated solution, and showing how to extract eye tracking and physiology metrics at key points, such as the cumulative average skin conductance level while a participant is focused on an AOI. We also show how to trigger other devices, such as scent delivery systems and electrical stimulators, when participants hit certain AOIs.

0:01-2:12 Greeting

2:13 This webinar is going to introduce and show the integration of eye tracking with stimulus presentation and physiological data acquisition and analysis.

2:32 And before we get started, most of what I’m going to be showing you today is live. I’ve got one little video clip, which makes it easier for me to show certain things on video. But for the most part, I try and do everything live.

2:50 Before we get started, I wanted to provide a little bit of background information about BIOPAC.

2:55 BIOPAC develops high-quality, scientific tools that allow you to measure physiology anytime, anywhere, with any subject.

3:06 We’ve been in business for 30 years, and 99% of the top 100 universities use BIOPAC products.

3:16 And BIOPAC has been cited over 40,000 times, and the Biopac Student Lab system is ranked number one for physiology experiments by members of the Human Anatomy and Physiology Society.

3:31 So, as I mentioned, we’ve integrated eye tracking with the AcqKnowledge software so that users can collect physiological signals and eye tracking data from the same subject in one easy-to-use application.

3:47 The demos that I’m going to be showing you today, and the setup and everything, are all based around the MP160 system, but for those of you that have MP150s, it’s going to work exactly the same.

4:00 Then, as far as the physiological data that’s being collected in the examples that I show, that was collected using the D Series smart amplifiers, but it really doesn’t matter what amplifier you have.

4:16 We have tethered amplifiers, the C Series, which are older. Probably many more of those around than the newer D Series.

4:27 Or the BioNomadix, which is the wireless option. So, that just kind of affords the participant a little bit more comfort.

4:35 They don’t necessarily have the cables connecting them to the recording device.

4:42 But, really, for the most part, when you’re doing eye tracking, particularly screen-based eye tracking, the participant isn’t moving anywhere anyways.

4:52 So, using a tethered system is actually not really a disadvantage. It may make the participant feel a little bit more comfortable, but at the end of the day, it’s really not a big consideration from a research data collection perspective.

5:15 And, as I said, these eye trackers that we’re talking about today are screen-based, so they fit beneath the monitor of a computer screen or on a laptop computer. And they basically look back at the participant.

5:32 And there’s a couple of light sources on there, which illuminate the pupil using infrared light. And then you’ve got the cameras. And depending upon which eye tracker you have, there are different sampling rates for the camera, from 40 frames per second all the way up to 200 hertz.

5:52 One of the most important considerations with these is actually the monitor size. These particular eye trackers are designed to work with, by modern standards, I would say, a smaller monitor. That’s why something like a laptop works so well. You know, most laptops are in that 15 to 17-inch range, so a 20-, maximum 22-inch display fits that very nicely.

6:23 So, there’s a little bit of background information. The devices themselves, they connect to a Windows-based computer via USB. There’s no power required. The device gets the power from the computer that’s running it.

6:40 As far as the research is concerned, you gain the benefit of having something that is synchronized with the physiological data out of the box. You don’t have to worry about trying to align your data sets.

6:55 All of that is taken care of for you and the same with the stimulus presentation.

7:01 So, the stimulus presentation will automatically put event marks within the software. You don’t have to worry about setting any of that stuff up, which actually simplifies the process, makes your life a lot easier, and allows you to get to data collection that much quicker.

7:19 Then, of course, once you’ve collected your data, you get all the cool features: attention maps, heat maps, 3-D surface, luminance-type maps.

7:30 Obviously, you get the gaze path so you can actually track to see precisely where the person was looking.

7:39 In terms of the integration, one of the things that we’ve done, we’ve created these areas of interest. And now, I’m going to cover all of this in detail, but the areas of interest actually provide us with the opportunity to give you added functionality in a very easy-to-use manner.

8:00 So, if you’re looking at a…or a participant is looking at a particular object that you’re interested in, you can give a beep to alert them or alert yourself.

8:14 You can actually trigger a stimulator.

8:18 So, if you want to provide some form of stimulation, that could be haptic, thermal, or electric. We’re also about to introduce a new scent delivery system.

8:33 So, anyone doing olfactory stimulation can trigger different scents from the participant looking at a particular part of an image or when an image is first presented.

8:49 And then, at the end of that, once you’ve completed, not only do you have all the visualization and everything else, you also get a very detailed report coming back from the system.

9:03 So, there’s a lot of information that comes from both the eye tracking and then, when you bring in the physiological data, from that as well.

9:13 It really provides a rich summary of everything that the participant was sensing and seeing during the particular presentation.

9:24 So, as I mentioned earlier, you’ve got two light sources that are shining back at the participant and then we’re looking at the reflection, the glints. And I’ll demonstrate that in one of the examples that I show.

9:42 One of the common questions we get asked relates to whether participants can wear glasses.

9:49 And you’ll see in the presentation that I quite often wear reading glasses and I’m able to use the system quite nicely with just regular readers.

10:01 But if you’ve got hard transition bifocals, those can sometimes cause additional reflections, so that can be a problem.

10:12 So, you may want to have some pairs of cheater readers for your participants.

10:17 And then they can just borrow those while they’re performing the study, and then hard contacts can also cause problems. So, these are just, you know, a couple of things to be aware of. The general sense is glasses aren’t a problem but, as always, the devil is in the detail.

10:37 And then another question that comes up quite often with the eye trackers relates to the infrared light that’s used. And the trackers are using infrared in the 850 nm range, and that’s basically, you know, occurring in sunlight, and those levels meet the safety guidelines for IR. So, you know, your participants will remain safe.

11:03 It’s nonhazardous, and, you know, they’ll be completely unaware that the light source is actually shining at them.

11:14 OK, so let’s, I’m going to start with the stimulus presentation, which, you know, trying to work through this in chronological order, this provides you with an easier follow-along. So, let me just come over to AcqKnowledge.

11:35 Actually, I’m going to minimize this and start right at the beginning.

11:42 So, I’m going to launch AcqKnowledge.

11:48 And I’ll change my display as soon as it comes up.

11:56 OK, perfect.

12:01 And I’m going to get out of here. OK, so now.

12:13 There we go. So, for those of you that are familiar with AcqKnowledge, this basically is the launcher. This is where you get to open your existing files, create new templates. With the latest version, we’ve got these new licensed features for eye tracking and stimulus presentation.

12:35 You can see there’s a new radio button option at the top, and this allows you to come in and create your own stimulus presentation.

12:46 So, I’m just going to create a new one, and we’ll walk through how we set everything up.

12:58 So, the way this dialog is set up on the left-hand side of the display you have the details of the individual stimuli that you’re going to use for your presentation. In the center, we have a preview of any stimuli that you’ve selected. So, if it’s an image, you’ll see a picture of the image.

13:22 Now, on the right-hand side, we’ll see the properties for that particular stimulus. Along the top here, we’ve got a series of icons.

13:33 At the moment, there are really only two that are active: Save and Add Stimuli.

13:41 I’m just going to start right at the top, which is, I’m going to pick a file.

13:50 And if I come into my computer, I’ve got a bunch of images already.

14:05 Where are we? Eye tracking and…

14:13 OK, so I’ve got a bunch of images here.

14:15 I can just take one of these images, let’s take a picture of the horse, and now we’ve brought in one image that can be presented. So, this is the preview. And we can maximize this out and make it a little bit easier to see.

14:33 On the right-hand side we get the details. And over here, we have the name, and you can change the name.

14:40 So, actually, for the presentation I ran, which I’m going to show you, when I created it I did not change the image names, and I kind of kicked myself afterwards. Because it’s much easier if you just have a simple name in there saying “horse.”

14:56 It just, it’s easier to know precisely what’s going on. And those names are actually used in AcqKnowledge for the event mark labels.

15:06 So, having good names in makes a ton of sense.

15:10 I just brought the original name of the image in. And you can create categories for your images.

15:20 And you can assign an image to a particular category that could be positive, negative, neutral, or you can create your own categories.

15:31 This becomes super helpful if you’re doing randomization, because if you want to present a series of negative images but you want them randomly presented, you can just select “Random” and then the group of images, and the software will pick however many you want out of that grouping.

15:59 So, this is super helpful.

16:04 The duration is fixed and then you can adjust the time.

16:08 So, the default is 10 seconds, but you can come here and change this to whatever you want. So now, this image will be presented on the screen for seven seconds. Beneath that, the system will always send an event mark.

16:24 So that means in AcqKnowledge, and you will see this when we run the experiments, you’ll have an event mark at the point the horse is presented to the participant, and the event type is stimulus delivery.

16:39 That’s the default, but BIOPAC has a lot of different event types built in.

16:44 And you can pick a different one if you prefer. A word of advice: stimulus delivery is used as the event type for some of the automated analysis routines. So, it kind of makes sense to stick with that. But if you want to do your own thing, you can do that.

17:05 And then you can control the background color, because more often than not, your images won’t fill the entire screen and you’ll have a region around the side, top, and bottom that you want to control. So, you can come in and you say, OK, well look.

17:22 By default, I would prefer to have a black background for that image, or you can pick any color you want out of the palette. So, you’ve got a little bit of control there. And then down below, you’ve got some controls for scaling fit.

17:42 So, if you remove that, it will revert to the full size of the image. It’s not going to scale it to fit on the screen. So, in most cases, you’re going to want to scale to fit. And the other important one is maintaining the aspect ratio.

18:02 Now, if you turn this off, some of your images will get completely distorted.

18:07 So again, these two controls just allow you to set things up very nicely.

18:15 Now we can automate some of these options, as well. So, I just brought in one image, but you can come in and just grab a whole bunch of images.

18:29 Like so. And it will bring all of them in together.

18:34 You can make changes in bulk. So, if you notice, the default I mentioned earlier is 10 seconds, but we had set this image to seven seconds. So, if I want to adjust all of these and make them all seven seconds, I can just come in and change this to seven seconds, and now each of these images reflects that change.

19:02 OK, so, it kind of makes things a little easier for you. The other thing you can do is, if you know you’ve got a folder of images that you want to use and you’re already organized, you can literally just point to the folder, and the system will bring all of those images in for you.

19:21 OK now, beyond that, you can add text, so you can display a message on the screen for the participant, images, which is what we’re doing.

19:32 You can bring in video, PDFs. You can present images, side by side, and then random, which is what I already mentioned. And then, the system will just randomly present images for you, OK? So that’s bringing in the individual stimuli. Now, we can set a sequence.

19:56 So, at the moment, we see on the right the list of images. And this is where I mentioned earlier, the names, can be really helpful.

20:05 Like, obvious names, because I don’t know what these names relate to, but we can just come over and bring these images in. Like so.

20:21 And from there, we can do a dry run.

20:28 There’s an option over here. The little triangle now becomes active. I hit dry run and the system will just run through exactly how we’ve got it set up.

20:39 So, this horse is being presented for seven seconds.

20:44 And we’ll just step through the entire sequence. I’m going to interrupt that for a moment. I use the Escape key; it just backs it out. If you want to change the sequence of these.

20:57 If you look at the bottom of the screen, if you want to move this image up to the top, you can do that.

21:06 Or if you want to do the opposite.

21:08 If you want to move one down, you can get your sequence exactly the way you want it. Test everything. Make sure the timing is correct.

21:18 And then everything is ready to go. More often than not, you’ll have a target before an image is presented, then the image, then a target.

21:29 The target is designed to focus the individual’s attention on a certain point on the screen, so if you put a cross in the center, the participant will then look at that part of the screen for you. So, you know their attention is right on the screen. And when you play your image, you know that every participant should be starting from the same point.
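That target–image–target structure is easy to picture as a list-building step. The sketch below is a generic illustration, assuming hypothetical names (`build_trial_sequence`, `fixation_cross`) rather than anything in the product:

```python
def build_trial_sequence(images, image_s=7.0, target_s=1.0, target="fixation_cross"):
    """Interleave a fixation target before each image, with a closing target,
    so every participant starts each image from the same screen location."""
    sequence = []
    for img in images:
        sequence.append((target, target_s))  # re-center the participant's gaze
        sequence.append((img, image_s))      # present the stimulus
    sequence.append((target, target_s))      # final target
    return sequence

trials = build_trial_sequence(["horse.jpg", "spider.jpg"])
# trials alternates: target, image, target, image, target
```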

21:58 OK, so that’s sort of creating a particular presentation.

22:06 You can come over to the Save option.

22:12 You can put in Save, and we’ll just call this “webtest.”

22:19 And now we’ve created and saved an actual presentation.

22:28 Pretty easy. You’ll notice that there’s this third tab over here.

22:33 This is setting everything up to do a recording and do area of interest, et cetera, et cetera.

22:43-23:05 Poll

23:06 While you were presenting, Anthony just asked, “Can you use videos in the presentation designer as well?” And you did just show that you can import videos.

23:14 So that answers that question. “How easy is it to integrate, how easy is the integration, and is there one output file?”

23:30 Well, there are essentially two output files. There’s the AcqKnowledge file, which kind of contains everything.

23:39 But there’s also the stimulus presentation file. But in terms of outputs, you’ve got the physiological data with the eye tracking information, and then you can output your report from that. But I’ll cover that in a little more detail as we move forward.

24:00 OK, great. Alright, so, I would love to just have…yep. OK, so the poll results I’ll just go ahead and read, Colt, if that’s OK. About a quarter of the people answered “yes,” that they are using screen-based eye tracking; about a quarter answered “no.” And then about 50% of people are considering it for the future.

24:21 So, great. So, hopefully, they’re learning a lot today about how to do it. Alright, thanks, Frazer. Back to you.

24:27 OK, thank you.

24:32 Give me one second to just get organized.

24:52 OK, perfect. So, this is, this is not the?

24:56 Maybe it is.

25:05 Yeah. This is a different presentation. But, no, it’s not the same one.

25:11 So, what I’m going to do now is show you how we add the Areas of Interest.

25:18 And if you look, I’ve gone into this data recording tab and I can open a graph template, and we’ll get into that in a moment.

25:28 We can enable the eye tracking. First of all, I have to come in and add a template.

25:57 Well, I’m not going to worry about the template. I’ll just edit the current setup. So, we can enable and disable channels, turn equipment on, et cetera, et cetera.

26:21 Close this for a second, make sure I’m connected to the right one.

26:55 I was going to turn a module on, for example, just to get us going here.

27:04 Now, in theory, we’ve got the system set up.

27:11 I can now come in and use the Area of Interest Editor, and I’ve enabled eye tracking. And this is the presentation that we were just looking at. And I can come in here, click on one of these, and you can see. Actually, this wasn’t the one we were looking at.

27:31 You can see there’s already some areas of interest in this particular presentation.

27:41 Now, the way we add an area or an area of interest, we put these tools along the top.

27:49 You can just pick an area like so, and then you can label it. So, put a label in there. Now, you have some controls.

28:03 And these are things that are super compelling because if the person dwells on this area of interest for a defined time period, user-defined time period, this is set to 500 milliseconds, but it could be longer than that.

28:22 You can move on to the next stimulus, or you can give a beep, an alert. You can start the stimulator, you can stop the stimulator, or you can set a digital output to control another device. Or you can just leave it to “none.” And all we’re doing is, you know, tracking when the person enters that particular area of interest.

28:47 And you can have as many areas of interest as you want on any of these images.
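The dwell behavior described here, firing an action once gaze has stayed inside an AOI for a user-defined period, can be sketched generically in Python. Everything below (`dwell_reached`, the 200 Hz sample rate, the rectangle test) is an assumption for illustration, not BIOPAC’s implementation:

```python
def dwell_reached(gaze_xy, inside_aoi, dwell_ms=500, sample_rate_hz=200):
    """Return True once gaze has stayed inside the AOI for dwell_ms."""
    needed = max(1, int(dwell_ms * sample_rate_hz / 1000))  # samples required
    run = 0
    for x, y in gaze_xy:
        run = run + 1 if inside_aoi(x, y) else 0  # reset when gaze leaves the AOI
        if run >= needed:
            return True  # this is where a beep / stimulator / digital output would fire
    return False

# Hypothetical rectangular AOI in screen pixels:
ball_aoi = lambda x, y: 100 <= x <= 300 and 200 <= y <= 400
```

At 200 Hz, a 500 ms dwell corresponds to 100 consecutive in-AOI samples; a single glance away resets the count, which is why brief pass-throughs don’t trigger the action.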

28:54 So, with something like this, you could come in and, you know, if you really cared about each of these little balls or beads or something, you have a freehand tool so we can get a little bit more precise.

29:31 All I’m doing is moving the mouse along and clicking at the points where I want to define the area, and double-clicking where it ends.

29:47 I can put in a label.

29:50 And now we’ve created a label for the syringe, an area of interest for the syringe. And then we could maybe have a grouping.

30:03 Like so. So, it’s pretty easy to come in and create these. First step is to create the stimulus presentation, and then the second one is to come in and create your areas of interest, and then once you’ve created your areas of interest for each of your images and you save that, you’re then pretty much ready to start recording your data.

30:34 And, you know, I sort of wanted to break it up into two steps, because you don’t have to perform the areas of interest at the same time that you’re doing the stimulus presentation. So, trying to create that workflow to give people a better idea of how everything ties together.

30:55 So, at this point, we have everything set up, really. We’ve got the stimulus presentation and the areas of interest that we’re going to use for eye tracking.

31:08 We’ve turned on the MP 160 and I just enabled one channel. But other than that, we’re pretty much ready to start the recording segment.

31:21 So, I’m going to hand back over to Brenda for another poll, and then we’ll get into running an experiment and actually seeing what the participants are seeing based on, you know, the stimulus that’s being presented, the stimuli being presented.

31:40 So, Brenda, I’m going to hand back over to you.

31:42 OK, great. Thanks, Colt for launching that poll. Alright, so we did have a couple of questions come in and Robert asked, “For repeated stimuli, does each need a different name or how is that setup?”

31:56 Yeah. You can set them up with different names or you can repeat.

32:00 Actually, one of the things I didn’t mention in that one column of the individual stimuli, there’s a number next to it.

32:10 And the default is one, you can put that to number two. But if you want to sort of randomize things, you can, set them up with different names.

32:23 So, it just depends on how you want to do it, but you can absolutely repeat, and the easiest way is just to increase the number of times within the actual Stim Pres setup.

32:37 OK, alright. Well, great. So, thanks. Thanks everyone, for participating.

32:44 It looks like about half the people are using Stim Pres.

32:50 A quarter of the people aren’t, and a quarter of the people are considering. Alright. Well great, Frazer, we’ll get to other questions that came in, so.

33:01 Perfect

33:03 Just one thing I was just kind of quickly, just go back in there.

33:18 Sorry, I said under the Stimuli tab, it’s under the Sequence tab. You’ve got this column here, Repeat. So, all you do is just come in and increase this number.

33:28 So, pretty easy to do that.

33:33 OK, so now, rather than try and juggle many things in the room at the same time, I’ve got a little video here that will walk us through the actual setup and recording of the data. This screencast is presented from the participant’s perspective, but I’ll also explain what the technician is seeing at the same time.

34:04 OK, so at this point, we’re ready to start recording some data. Coming to the MP160 menu, to make things a little bit easier, there’s a little wizard here that allows you to enable the eye tracking signals that you want to capture together with the physiological data. So, if you remember, earlier I added ECG and heart rate, and you can just come down here and pick any of these eye tracking measures.

34:34 And you’ll have these channels included with the physiological data as well. So, I’m just going to hit next.

34:46 You can choose the fixation algorithm, either dispersion or velocity, the two standard options.

34:56 By default, dispersion is the algorithm that we use.
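Dispersion-based fixation detection is commonly described in the literature as I-DT: grow a window of gaze samples while their spatial spread stays under a threshold, and call that window a fixation. The following is a generic textbook-style sketch with placeholder thresholds, not BIOPAC’s actual implementation:

```python
def _dispersion(window):
    """Bounding-box dispersion: (max x - min x) + (max y - min y)."""
    xs = [p[0] for p in window]
    ys = [p[1] for p in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations_idt(points, max_dispersion=25.0, min_samples=5):
    """Return (centroid_x, centroid_y, n_samples) for each fixation."""
    fixations = []
    i = 0
    while i + min_samples <= len(points):
        j = i + min_samples
        if _dispersion(points[i:j]) <= max_dispersion:
            # Grow the window while the samples stay tightly clustered.
            while j < len(points) and _dispersion(points[i:j + 1]) <= max_dispersion:
                j += 1
            xs = [p[0] for p in points[i:j]]
            ys = [p[1] for p in points[i:j]]
            fixations.append((sum(xs) / len(xs), sum(ys) / len(ys), j - i))
            i = j  # skip past the fixation
        else:
            i += 1  # saccade sample; slide the window forward
    return fixations
```

Velocity-based detection (I-VT) instead classifies each sample by point-to-point speed, which is why the two are offered as alternative standard options.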

35:04 Going to include the area of interest calculations as well.

35:08 So that will give us event marks, like trigger marks, when an area of interest is hit.

35:32 This drives the experiment. This is the technician’s view at the moment.

35:36 First thing we need to do is determine which monitor we’re looking at. I’ve only got one monitor, but for a typical setup, we would have two monitors, one for the participants, and one for the technician.

35:48 This part here would be on the technician’s view. If I click Identify, this green screen identifies the monitor that is actually going to be used for the presentation.

36:04 If we had a second monitor connected, you would see display two labeled here, so you can toggle between the two.

36:13 The eye tracker, there’s some controls here in terms of calibration.

36:18 We default to a five-point calibration, and you can do a 9- or 16-point one. The way these work, you’ll see green dots appear on the screen with a cross in the center, and the participant just has to fixate on the green dots, and then the calibration automatically moves on to the next point. Obviously, it will do that five times. With 16 points, you’ll have 16 green dots appearing on the screen one at a time, going through a sequence the person must follow.
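The 5-, 9-, and 16-point layouts can be pictured as normalized screen targets. This is a hypothetical sketch of how such grids might be laid out (the function name and margin value are my own assumptions, not the tracker’s actual geometry):

```python
def calibration_points(n, margin=0.1):
    """Normalized (x, y) targets: 5 = center plus four corners,
    9 = 3x3 grid, 16 = 4x4 grid. `margin` keeps dots off the screen edge."""
    lo, hi = margin, 1.0 - margin
    if n == 5:
        return [(0.5, 0.5), (lo, lo), (hi, lo), (lo, hi), (hi, hi)]
    side = {9: 3, 16: 4}[n]
    step = (hi - lo) / (side - 1)
    return [(lo + c * step, lo + r * step)
            for r in range(side) for c in range(side)]
```

More points cover more of the display, trading a longer calibration for a better fit of the gaze-mapping model across the screen.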

36:59 Padding, Video, Compensate. Padding will eliminate the blinks from the eye tracking data, which makes the data look nice and clean, but you actually lose the blink information, which can be useful. There’s basically a dropout in the eye tracking signal each time the eye tracker loses the ability to see the pupil, which could be from a blink or maybe the participant looking away.
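One common way software bridges such dropouts is linear interpolation across each run of missing samples. The sketch below illustrates that general idea only; it is not necessarily how AcqKnowledge’s Padding option is implemented:

```python
def interpolate_dropouts(signal):
    """Linearly interpolate across runs of None (blinks / lost pupil)."""
    out = list(signal)
    n = len(out)
    i = 0
    while i < n:
        if out[i] is None:
            start = i
            while i < n and out[i] is None:
                i += 1
            # Anchor on the valid samples bordering the gap.
            left = out[start - 1] if start > 0 else (out[i] if i < n else 0.0)
            right = out[i] if i < n else left
            gap = i - start + 1
            for k in range(start, i):
                t = (k - start + 1) / gap
                out[k] = left + (right - left) * t
        else:
            i += 1
    return out
```

The cleaned trace looks smooth, but as noted above, the blink timing itself is lost once the gap is filled, so keeping an unpadded copy can be worthwhile.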

37:29 Video allows the technician to have a view of the participant’s eyes during the experiment.

37:38 OK, so now, when we hit the green triangle at the top, that will start the experiment. So, I’m sitting down in front of the computer screen. I’ve got the eye tracker looking back at me.

37:51 There are two green LEDs that indicate that the device is able to see both of my pupils.

37:59 When I hit Launch, we’ll now see an image of my eyes, OK.

38:06 Now, if you look carefully, even though I put my glasses on, there are in fact three crosses per pupil: one large one that’s the identification of the pupil. Then down below, if I move my head, you can probably see there are two glints. There are two crosses indicating the system can see the glints.

38:27 Now we’re into the calibration sequence, and I’m just looking at those green balls with the red crosses.

38:39 And once we’re done with that, the system will provide a summary. It gives an indication; these little hourglasses show us that the system is tracking my eyes pretty accurately in these areas. So, I’m just going to hit Accept at this point, and we’ll move on with the experiments.

39:04 So, first image. OK, there was the beep. I was looking at the insect.


OK, I’m now going to look at the spider. And there’s the beep.

39:35 OK, now I’m going to look at the worm. And there’s our beep.

39:43 OK, now perfect. So now let’s look at our data.

39:52 I’m going to turn some of these…

39:57 OK, so that’s sort of actually running an experiment from the, or actually both sides, really. There’s the setup, which would be what the technician is doing.

40:11 And then, actually seeing what the participant is seeing. And as I mentioned in that video, having two monitors is quite typical for these types of experiments.

40:29 It’s definitely helpful because the technician wants to see precisely what’s going on. And you can also look at the physiological data at the same time. You can see the video of the eyes, or we can also show the actual video of the participant.

40:47 In a moment, I’ll give you an example of that as well when we get into looking at the data that was collected.

40:55 So, there was something that I wanted to mention. I’m a big proponent of videoing your participants if you can.

41:01 There’s so much information that you can get during a particular experiment. So, not only do you get the eye tracking information, the physiological data, but also just having a good understanding of precisely what the participant was doing at any given moment.

41:20 Even if it’s for sanity checks. You know, if you see something strange in the physiological data, you can jump to that point, and it will advance to the corresponding time points in the video. And you can see, you know, maybe the person was scratching, yawning.

41:41 Maybe they were looking away. There’s all manner of different pieces of information that can be gained just from the video not to mention the actual behavioral idiosyncrasies that occur.

41:56 So anyway, that is basically running an experiment.

42:00 So, we created the presentation, marked the areas of interests that we were concerned with, and then we actually saw how a participant would interact with the system and how the technician runs it all. So, I’m going to hand back over to Brenda for our next audience poll.

42:28 OK, great, thanks. So now we’d like to know if you’re interested in combining all three, which I’m sure you are because you’re attending this webinar, but it’s always good to hear.

42:41 OK, so, you know, Frazer, we have a bunch of questions from people about doing this in the MRI.

42:49 And I know that this is screen-based eye tracking, and it cannot be done in the MRI. If you have any ideas for eye tracking that could be done in the MRI, or do you want to handle that later in the Q&A?

43:02 There are specific eye trackers for MRI that are MRI-compatible, but these definitely are not.

43:11 These are, as I mentioned earlier, designed to be placed beneath the actual display that the participant is seeing, and more often than not, with the MRI applications, the image is presented onto a screen.


The participant is looking quite far away. And one of the other things that I didn’t mention, which I should have, came up in the specifications. And that relates to the virtual head box. The virtual head box is the area the participant needs to be located in relative to the eye tracker cameras.

43:54 And on the front of the display, you have the ability to see two LEDs that indicate whether the person’s eyes are visible. Given that virtual head box, if you’re in an MRI and images are being projected onto a screen, you would just be too far away from it.

44:21 OK, great, so we closed the poll. We had lots of people interested in combining the tools, so great. That’s great news.

44:32 And your audio just suddenly became a little gravelly.

44:38 So, we may give you some, we may jump in and let you know if it continues to be a little funky or if it cuts out. So, I guess we’ll go back to you, but we are switching to your other computer, right?

44:53 Correct. And there we go. Perfect.

45:06 And you can put your webcam back on if you want.

45:10 Yeah, that should just be coming up. Or not.

45:19 Yeah, it’s a little slow. Oh, it’s flat.

45:24 Well, don’t worry about the webcam, you can just turn it back on when you go back to your other computer, OK?

45:28 Yep, OK, so.

45:33 This is kind of the fun part. This, when we ran the experiment earlier, this is a slightly different presentation, I set this one up yesterday.

45:47 On the left-hand side, you can see the physiological data and the right-hand side, you can see the images that were presented to the participants, and when you move around in the recording, naturally.

46:14 One.

46:18 Right one, OK.

46:19 And you move around in the recording, you’ll advance any of the images.

46:26 So, all I’m doing is I’m just clicking on the physiological data.

46:32 For those of you that are familiar with AcqKnowledge, this is the cursor tool, the pointer, and these are event marks. I’m just going to zoom in a little bit to simplify. Actually, yeah, the zooming in won’t really help.

46:52 If you look along the top, there’s this white bar, and there’s lots of little icons in there.

46:57 These little icons, and I will expand this out a little bit so that we’re looking at more data.

47:06 Back to these little icons. Remember, during the presentation, I said BIOPAC defaults to stimulus delivery as one of the event types. This is what I was talking about.

47:19 This is an event mark that came at a point the image was presented to the participant.

47:21 I just clicked on that; it’s now red. And that information actually comes through automatically. From a researcher’s perspective, a technician’s perspective, you don’t have to think about it. The software is just going to present these markers at key points.

47:52 And at those key points, in the case of stimulus delivery, you get the light bulb.

47:54 Then there are other labels coming in here that are telling us when the participant hit an area of interest, and in this case, it was the ball.

48:06 And then we hit another one over here. The person hit the ball again. So, we get a lot of additional information coming in.

48:16 Not only do we know when the image was presented, but we also know exactly when the person looked at that area of interest.

48:31 On the right, this is just showing you the gaze path.

48:36 And again, as I mentioned, you can start at the beginning.

48:42 You can sort of think about this almost like a video file. Down in the bottom there, you can see there’s a green start indicator, or play button.

48:52 I hit Play, it’s just going to play it back and if you look at the data in AcqKnowledge you’ll see the cursor moving across the screen and you’ll see, the way I’ve got it set up, we’re picking out the fixation points and tracking where the person was looking.

49:16 And the cursor sort of pauses for a second at each of the stimulus deliveries.
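The fixation points being picked out during the playback come from a fixation-detection step on the raw gaze samples. As an illustration only, here is a minimal Python sketch of dispersion-threshold (I-DT) fixation detection, one common approach; the thresholds and the sample format are assumptions, not AcqKnowledge’s actual algorithm.

```python
def detect_fixations(samples, max_dispersion=30.0, min_duration=0.1):
    """samples: list of (t_seconds, x, y) gaze points.
    Returns (start, end, centroid_x, centroid_y) fixations."""
    fixations = []
    i, n = 0, len(samples)
    while i < n:
        j = i
        # grow the window while the points stay within the dispersion threshold
        while j + 1 < n:
            xs = [s[1] for s in samples[i:j + 2]]
            ys = [s[2] for s in samples[i:j + 2]]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                break
            j += 1
        if samples[j][0] - samples[i][0] >= min_duration:
            xs = [s[1] for s in samples[i:j + 1]]
            ys = [s[2] for s in samples[i:j + 1]]
            fixations.append((samples[i][0], samples[j][0],
                              sum(xs) / len(xs), sum(ys) / len(ys)))
            i = j + 1
        else:
            i += 1
    return fixations
```

A stable cluster of samples becomes one fixation with a centroid, which is what gets drawn and paused on during playback.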

49:38 OK, so you can play these back, you can highlight just an area that you’re particularly interested in, and you can see what occurred during that period of time, if you really care about the last few seconds.

49:54 You just highlight that area. Oops, I can manage my mouse a little better.

50:02 OK, you get to see what was going on just in those last few seconds. And you can control how long the tail is.

50:13 So, coming into, we look at this toolbar along the top.

50:18 What I’m showing you is the gaze path, which is this first icon here. And it opens up on the display below.

50:26 If I open up the controls, we can set whether this is a constant or a raindrop display, and adjust the transparency to make it more or less visible. You can add or remove the gaze path if you don’t necessarily want the line drawn between points, and you can adjust the length of the tail.

50:55 So, by reducing this down, when you update it, you see less data on the screen.

51:02 So now, if I close this down, we’re just seeing much less data.

51:20 So, there’s all kinds of controls that you have within that. You can also change this to a heat map.

51:34 There, so we got the heat map, and again we’ve got controls for those where we can increase and decrease the intensity.

51:43 We have luminance display, which kind of just highlights the areas that are relevant.

51:52 And again, we can control those.

52:02 And we can go all the way down. OK.

52:09 There’s also a 3-D image. So, this is sort of like a 3-D heat map, almost.

52:20 And you can control the position of everything, so that you get the view that you want in all directions, X, Y, and Z.
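To make the heat map idea concrete: a display like the one shown can be built by splatting a Gaussian around each fixation, weighted by its duration. This Python sketch is purely illustrative; the grid size and sigma are assumptions, not the software’s settings.

```python
import math

def heat_map(fixations, width, height, sigma=25.0):
    """fixations: list of (x, y, duration). Returns a 2-D list of intensities."""
    grid = [[0.0] * width for _ in range(height)]
    for fx, fy, dur in fixations:
        for row in range(height):
            for col in range(width):
                d2 = (col - fx) ** 2 + (row - fy) ** 2
                # longer dwells contribute more heat around their location
                grid[row][col] += dur * math.exp(-d2 / (2 * sigma ** 2))
    return grid
```

The intensity controls mentioned above correspond to scaling this accumulated grid before it is colored; a 3-D view just plots the same grid as a surface.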

52:33 Dependent upon, I missed one, dependent upon the image. Let’s see if we can find one that’s a little bit more interesting.

52:42 I think a lot of these only had one area of interest on there.

52:49 Yeah. The more areas of interest, you’ll just see this divided up into different regions, but then, you know, I selected the head of the snake as my area of interest.

53:01 It gives me the time on the area of interest, it gives me the time of the first hit, and it gives me the total number of fixations, just for that area of interest.

53:12 I bypassed this one. This gives us the statistics over the area of interest, so now you’re not guessing. And, obviously, with more areas of interest, you could have one on the person’s nose, in this case.

53:26 Maybe make this a little bit smaller. You could be looking at the eyes and get a lot of good, rich information there.

53:34 One of the nice things about this is we get the chance to bring in additional information with this little sigma icon on the side there.

53:46 If I move this down… There we go… And I come into this control again.

54:05 I can add a measurement, so I’m just going to add electrodermal activity.

54:13 I’ll go ahead, OK. And now the system is telling me that the average skin conductance level for the time the person was viewing the area of interest was actually eight microsiemens.

54:34 And the way we do that is each time the person dwells on that area of interest, we track the skin conductance level in this case, and then, over the entire time period that the image was presented for, we provide the average for all of those values.

55:00 So, it’s kind of a nice metric. It would be difficult to do that manually. So, you know, it’s one of the benefits of having everything fully integrated. It’s kind of nice to be able to get access to this data very, very quickly.
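The metric just described, the average skin conductance level over all dwells on an area of interest, is straightforward to express. A hedged Python sketch, assuming evenly sampled SCL data and dwell intervals in seconds (the data layout is illustrative, not AcqKnowledge’s internal format):

```python
def mean_scl_during_dwells(scl, sample_rate, dwells):
    """scl: list of skin conductance samples (microsiemens);
    sample_rate: samples per second;
    dwells: list of (start_s, end_s) intervals the gaze spent on the AOI."""
    selected = []
    for start, end in dwells:
        i0 = int(start * sample_rate)
        i1 = int(end * sample_rate)
        selected.extend(scl[i0:i1])  # collect samples inside each dwell
    return sum(selected) / len(selected) if selected else float("nan")
```

That is the manual bookkeeping (find every dwell, slice the SCL channel, pool, average) that the integrated tool does for you.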

55:19 And then we’ve got some other displays.

55:23 Again, if you’ve got more areas of interest, these become a little bit more compelling. And I’m going to close this out and just open up another file.

55:44 Alright, so, I know someone asked previously, I think at the point of registration, can you video the participants at the same time? Well, this is using the media functionality within AcqKnowledge.

56:00 So, for those of you that aren’t familiar with this, you can literally plug in a USB webcam.

56:07 You can point the webcam at the participant. Using the media functionality, you can set it up and you can access the playback viewer.

56:16 And no matter where you click in the recording, you’ll get a record of what the participant was doing.

56:21 Well, I mean, actually, this is not that dramatic because our participant is actually just staring at a screen, but, you know, if you look carefully, you can see, she’s moving very slightly. Her expressions don’t change too much.

56:37 She’s obviously a great poker player, not displaying her emotions. But this is available while we’re doing the eye tracking and analysis, and we can come in. This is how we get to that dialog I was showing you previously: Show Visualization Viewer.

57:00 When I open that up and size this, you can get these views. This is where it helps, when you do your analysis, to have a nice big monitor. And we can open up.

57:13 This is the gaze path, OK? And when we jump to any particular area, a picture of a spider here, this advances the video. Or I can go all the way back to the beginning and hit Play.

57:30 And we get to see. Actually, I didn’t go back to the beginning.

57:37 Now we’re playing back the video and the images that were presented.

57:55 I’m going to turn the video off.

57:59 And I wanted to show some of these, because I believe, I’m hoping. There we go. There’s at least an example of two areas of interest.

58:10 So, we’ve got the eyes, and the pumpkin. And if we come over to this view.

58:22 We’ll see it all combined.

58:25 And actually, this is a really good example, because there was an area of interest created around the hat and there was no time spent on it. It didn’t meet our criteria.

58:40 For the pumpkin, there were 10 fixations. There was nothing on the mouth, and there were three fixations on the eyes.

58:51 Now, if we come over to this plot here, we have the option of showing the background as well.

59:03 So, a lot of time was spent just looking at the background. The background is anything that isn’t an area of interest. And then you can see the path that was taken.

59:15 This image was presented for eight seconds. Then, if you want to know the sequence, this is the sequence going across the screen.
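The rule just stated, background is anything that isn’t an area of interest, plus the sequence view, can be sketched as follows. This Python illustration assumes rectangular AOIs and fixation points as simple coordinates; both are assumptions for the example, not the software’s representation.

```python
def dwell_sequence(fixations, aois):
    """fixations: list of (x, y) points in time order;
    aois: {name: (left, top, right, bottom)} rectangles.
    Returns the sequence of regions visited, repeats collapsed."""
    seq = []
    for x, y in fixations:
        label = "background"  # anything outside every AOI
        for name, (l, t, r, b) in aois.items():
            if l <= x <= r and t <= y <= b:
                label = name
                break
        if not seq or seq[-1] != label:  # collapse consecutive hits into one dwell
            seq.append(label)
    return seq
```

Runs of consecutive fixations in the same region collapse into a single dwell, which is exactly what the dwelling-sequence report lists.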

59:29 And then, once we’ve got all of that information, we come back. These can be pulled out. If you’ve got a large monitor, you can pull these out into their own windows, so you get a better view. You can look at all of these simultaneously. It’s a little bit difficult to do that on the size of monitor that I’ve got here, but it is possible.

59:52 And then finally, you have the ability to present or extract some information.

1:00:01 So, we’re going to get an area of interest summary, the dwelling sequence, and the fixation sequence. I’m going to run this report.

1:00:13 This will calculate everything, and then it will open it all up into Excel.

1:00:21 It takes a little while.

1:00:25 A lot of stuff going on.

1:00:36 One of the things I didn’t mention earlier, while this report’s running: if we look at the information on the screen, in this particular file the top channel is skin conductance level, then we have ECG, we’ve got heart rate and interval, and then we’ve got X and Y position.

1:00:57 And you’ll notice these sort of drop down to zero.

1:01:01 These are eye blinks.

1:01:03 And I mentioned in the video that you can set the system up so it will eliminate these.

1:01:10 But, from my perspective, I think it provides good information and you can always eliminate them afterwards. We’ve got tools in the software that will do that.
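One common way to eliminate blinks afterwards, as mentioned, is to interpolate across the dropped samples. This Python sketch treats zero as the blink marker, matching the drop-to-zero behavior just described; that convention, and linear interpolation, are assumptions for illustration, not a description of AcqKnowledge’s built-in tool.

```python
def interpolate_blinks(values):
    """Linearly interpolate across runs of zero (dropped) samples.
    Runs touching either end of the recording are left unchanged."""
    out = list(values)
    n = len(out)
    i = 0
    while i < n:
        if out[i] == 0:
            j = i
            while j < n and out[j] == 0:
                j += 1  # find the end of the zero run
            if 0 < i and j < n:  # valid neighbors on both sides
                left, right = out[i - 1], out[j]
                for k in range(i, j):
                    frac = (k - i + 1) / (j - i + 1)
                    out[k] = left + frac * (right - left)
            i = j
        else:
            i += 1
    return out
```

Keeping the raw drops in the recording, as the presenter prefers, means you can always apply (or skip) a cleanup like this after the fact.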

1:01:18 This is the information about the fixations, and then down below is the area of interest information that’s coming from the software.

1:01:33 Now, one thing that I haven’t done, but you can come in, run the electrodermal analysis, and do an event-related analysis in the system. Here we go.

1:01:53 It’s taking its time. You can then do a full analysis based on the event. So, you can identify specific versus non-specific skin conductance responses as well.

1:02:05 So, there’s a lot of stuff that can be done once you’ve got your data collected from the participant.
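On the specific versus non-specific distinction just mentioned: a skin conductance response is conventionally called event-related (specific) when its onset falls inside a latency window after a stimulus event. A hedged Python sketch; the 1–4 second window is a common convention, not necessarily AcqKnowledge’s default, and the input format is illustrative.

```python
def classify_scrs(scr_onsets, event_times, window=(1.0, 4.0)):
    """scr_onsets: detected SCR onset times (s);
    event_times: stimulus event times (s).
    Returns {onset: 'specific' or 'non-specific'}."""
    labels = {}
    for onset in scr_onsets:
        # specific if any event precedes the onset by a plausible latency
        specific = any(window[0] <= onset - ev <= window[1]
                       for ev in event_times)
        labels[onset] = "specific" if specific else "non-specific"
    return labels
```

The automatic stimulus-delivery event marks described earlier are what make this classification possible without manual timestamping.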

1:02:29 Frazer, are you there? We don’t have any audio right now.

1:02:33 Yeah, I was just pausing, waiting for the report to do its thing. OK, got it.

1:02:40 Do we have any questions at this point? Maybe it’s a good…

1:02:45 Yeah, while we’re waiting. That’s a good idea. So, Anthony asked a couple of questions. Do areas of interest work the same way for video stimuli as they do for images?

1:02:56 So, compared with images, are there any differences? And then…

1:03:01 That’s a great question. OK, so when you run a video, there is no way of applying areas of interest to the video.

1:03:12 Basically, you get the path of the person looking at the video, wherever they’re looking at that particular moment, whatever is being displayed. You also lose the ability to deliver events, because we provide events

1:03:31 At the point the image or the video is presented to the participant.

1:03:39 So, it’s not quite as rich with videos. But certainly, you get an event marker at the beginning, and you definitely get to track the participant while they’re watching the video.

1:03:52 OK, and then related to that, the same person asked, “Do the heat maps and gaze plots differ for video stimuli?”

1:04:05 Do they differ?

1:04:11 So, can you still get a heat map and a gaze plot on a video stimulus versus a…

1:04:21 …a static image? It’s a little bit different, because obviously the video is moving through all the time, right? So, it’s not quite the same as you get with the, OK, so, there’s that report. But it’s not going to be the same as looking at a static image.

1:04:45 OK. OK, so here’s the report that took quite a while. You get a summary of the file. BSP is the stimulus presentation extension. So, someone asked, what files do you get?

1:05:06 Well, one of them is the stimulus presentation file. So, in this case, we ran the stim pres Morgan2.bsp, and then the data that was collected was given a filename of Morgan test web.acq.

1:05:23 And then it gives us information about each of the images that were presented to the participant and any areas of interest that were delivered as well.

1:05:38 So, highlight this.

1:05:49 So, I realize one of the things I didn’t do on this file, though it was on the other one: I didn’t add in skin conductance level. But if we had skin conductance level, this would be another column over here.

1:06:01 And that will basically measure the average skin conductance level while the person was looking at the particular area of interest.

1:06:12 But, you know, you get the dwell time.

1:06:16 That’s the cross. Let’s look at something. Here’s an image here. OK. So, we get the background.

1:06:22 Food, we get different objects that were presented: the number of entries to that area of interest, the time of first entry, the dwell time, the maximum dwell time, the mean dwell time, the number of fixations, the time of first fixation, the total fixations, and then the standard deviation.

1:06:46 So that applies to all the images, and it’s all presented in chronological order, basically the way the images were presented.
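The per-AOI columns just listed all derive from the dwell intervals on that area of interest. As a minimal sketch, assuming the dwells for one AOI are available as (start, end) pairs in seconds (an illustrative format, not the export’s):

```python
def aoi_summary(dwells):
    """dwells: list of (start_s, end_s) intervals on one AOI, in time order.
    Returns the summary fields mirroring the report columns."""
    if not dwells:
        return None  # AOI never entered, like the hat example above
    durations = [end - start for start, end in dwells]
    return {
        "entries": len(dwells),
        "time_of_first_entry": dwells[0][0],
        "total_dwell": sum(durations),
        "max_dwell": max(durations),
        "mean_dwell": sum(durations) / len(durations),
    }
```

Fixation counts and first-fixation time come from the same bookkeeping applied to the fixation list instead of the dwell list.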

1:06:56 Then we get the dwelling sequence, so this is basically the path that the person took.

1:07:03 So here, background, face, background, ball, background, background, target, cross, background, object, background. Not much there, it’s the cross.

1:07:20 So, Main skull, the background, and the background. And then we’ll get the fixations summary at the end.

1:07:29 So, this gives us the fixation sequence. You get the coordinates, the X and Y coordinates. You get the duration, the fixation duration.

1:07:39 And the area of interest and the target. So, this was the cross.

1:07:47 So, this would be for the face, the background, and the ball. So, this was the person holding the ball. You get a ton of information coming out of these reports.

1:08:03 You get a lot of useful ways in which you can present the data, you know, basically set it up the way you want to get the information you need from your participants. And it’s pretty easy to set up.

1:08:21 The stimulus presentation is easy to create. The event marks and everything are all automatic; they come through.

1:08:29 The analysis is pretty straightforward. You know, the visualization tools are easy to use. You’ll also notice that in AcqKnowledge, we’ve got a toolbar here that comes up that allows you to jump to any of these tools, so it’s basically the same.

1:08:46 That’s what we’ve got going on over here.

1:08:50 So, you can jump from within AcqKnowledge without having to go over to this window.

1:08:55 It’s kind of nice if you’ve got two monitors, you can control everything there. So, easy to set up, and a pretty streamlined solution.

1:09:06 I think that pretty much concludes everything. I’m going to hand back over to you, Brenda.

1:09:12 OK, great, so let’s make your other computer the main computer, because the webcam wasn’t working on the one that you were just on. It probably has to be turned on or something. And I do have some great questions here. We’ll start out with a question from Jeremy.

1:09:32 And aside from—there you go, I see you now—aside from gaze tracking, does the tracker track pupil dilation as a graph in AcqKnowledge?

1:09:44 You muted your computer; that’s what I was trying to do. I muted it. Never mind, I got it. Oh, I just unmuted it and remuted it. OK.

1:09:54 Alright. So, Jeremy asked, “Aside from gaze tracking, does the tracker return pupil dilation as a graph in AcqKnowledge?”

1:10:17 OK, so, wait, can you just talk for a second? You did the wrong… you gave me the wrong one?

1:10:33 There’s an echo now. And I think we need to turn the volume down on one of your machines.

1:10:36 I just turned the other one on. Should be good.

1:10:42 Yeah, we’re good now. We’ve got it, I think.

1:10:44 Yeah. I’m just trying to give you that. There you go.

1:10:55 So, here’s pupil diameter.

1:10:59 Which I think is what the person is asking for.

1:11:04 Oh, I’m not sure which monitor you’re seeing, Brenda. We’re seeing the one with.

It says AcqKnowledge up on the screen, and it has the Eye Tracker Wizard on the front. That may be the video. Yeah, I think that’s the video.

1:11:21 OK, so you should be seeing the different channels that are available. So, pupil diameter is one of the options there.

1:11:36 OK, OK, great. Alright, and then Jorge’s asking, “Can you add a simple question while showing an image? For example, ‘Do you like the image? Yes or no?’ Having the time to answer integrated with all the physiological data?”

1:11:52 Yeah, there’s different ways in which you can do that.

1:11:55 Obviously, text is available, so you can create text within there. You can add the question to the image, or you can present the question after the image is presented. It just depends on how you want to do it. So, the simple answer is, yes, you can do that.

1:12:12 And then, you can do different things where, for example, you could have “Yes/No” on the screen, and you could create areas of interest for Yes and No and just have the participant look at the correct response, the one that relates to them. And then you’ll have a record of them hitting Yes or No.

1:12:36 Or you can have event button boxes, et cetera, et cetera. There’s all different ways in which you can handle that, but it can be done quite easily.
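The gaze-as-response idea just described can be read straight off the AOI-hit events: the participant’s answer is the first response AOI they hit after the question appears. A small Python sketch, assuming hits arrive as (time, AOI-name) pairs, an illustrative format rather than the actual export layout:

```python
def gaze_response(aoi_hits, question_onset, choices=("Yes", "No")):
    """aoi_hits: list of (time_s, aoi_name) events;
    question_onset: when the question appeared (s).
    Returns (answer, response_time) or (None, None) if no choice was hit."""
    for t, name in sorted(aoi_hits):
        if t >= question_onset and name in choices:
            return name, t - question_onset
    return None, None
```

As a bonus over button boxes, this gives you a response time for free, since the hit events are already on the same clock as the physiology.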

1:12:45 OK, great. So, let’s see here. A few people asked about EEG.

1:12:53 Can you collect EEG and trigger events with the EEG, like with fNIRS and other brain imaging products like that?

1:13:02 Yes. So, EEG no problem. You can bring the EEG in, basically any physiological signal coming into the MP160 or MP150 can be recorded. So, EEG is a good one, no problem there.

1:13:17 The fNIRS system, we can synchronize with that, and we can make sure that event trigger marks are going out to the fNIRS system as well.

1:13:35 OK. And we had a question about…

1:13:43 Brenda, I just want to clarify: when I’m talking about sending triggers to fNIRS, I’m talking about the fNIRS devices that we sell. I’m not sure about third-party devices.

1:13:55 So, I just want to be very clear. I would imagine that most of these devices have the ability to receive triggers, but just to be transparent.

1:14:0 And then the stim presentation with AcqKnowledge is only available with the eye tracking, right?

1:14:14 You’re not able to use that outside of eye tracking, the eye tracker?

1:14:19 Exactly. Yeah. Yeah.

1:14:23 I mean, at some point, we may broaden it. But at the moment, it’s really geared towards the eye tracking. And it’s sort of tightly integrated with that.

1:14:3 And then, can you use this with VR, simulations in VR and Oculus or other headsets?

1:14:44 Well, we have a whole webinar that we ran on virtual reality and eye tracking.

1:14:52 It’s a completely different setup for VR, and I would recommend watching that particular webinar, because it’s a very broad topic.

1:15:03 But we do have solutions for virtual reality and eye tracking through our partnership with WorldViz.

1:15:15 OK, Great. Alright, well, last chance for questions, everyone.

1:15:18 I know there’s still a lot of people on, and I was taking all of the questions that we had, but I’ll do some closing comments here. I’d like to show a few things here at the end, since you had so many great questions.

1:15:36 We do offer other webinars. You just mentioned the one about VR and eye tracking; we can’t go into a lot of detail there, so you can see all of our on-demand webinars available at the BIOPAC.com/webinars page.

1:15:53 And we have over 50 webinars, you guys.

1:15:57 So, there’s a lot of opportunity for learning here, and all different topics, all different signals, all different application areas.

1:16:06 If you scroll down to the Virtual Reality webinars, you can see that we have video and multimedia, all kinds of stim pres videos. So, actually, somebody else asked a question about stim pres with other tools.

1:16:21 Can you integrate that with AcqKnowledge?

1:16:26 Or how does that work with other stim pres tools?

1:16:30 Well, yeah. AcqKnowledge interfaces with, you know, we actually sell E-Prime and SuperLab, and both of those work very well with AcqKnowledge. If you mean specifically for eye tracking, you lose the level of integration.

1:16:52 There were certain things that we needed to do to simplify setup for the user, and that’s one of the reasons why we put the stimulus presentation into the system: it allowed us to have greater control. But certainly, integrating with SuperLab and E-Prime for stimulus presentation is no problem at all.

1:17:19 Great. So, we do have a couple of webinars coming up in August. One of them is about refocusing on the student. This is for people in the teaching industry.

1:17:28 Refocusing on how to increase student collaboration in a lab setting. Also, we have an AcqKnowledge bootcamp coming up at the end of August. So, we look forward to having you all there.

1:17:41 And let’s see here. Oh, I want to just show one other thing. I don’t know if you guys can hear my dog in the background; he’s trying to get into the room. For those of you who are setting up experiments and research projects, you can reach out to our support team. You can just email support@biopac.com or call us.

1:18:01 But we have lots of great resources here for you, and we have a great team that helps you figure out the best way to set up your experiment if you’re new to it or to troubleshoot issues that come up or questions that you have.

1:18:15 So, feel free to reach out to us on that. You can also, if you want, have a salesperson contact you.


But if you want to reach out to your salesperson right away, you can just go to any page on the BIOPAC.com website, scroll down, and you’ll find your local sales contact. Mine is Amy because I’m based in California, but you will see your local representative here, or the company that represents your country.

1:18:43 So, definitely feel free to reach out to us that way. There’s usually a number and an email available here, or you can just as easily fill out this form, or call.

1:18:57 OK, let me just check in. I don’t see any other questions, so I guess we’ve covered it all, Frazer. Let me just take a minute to thank you for all the work you put into this presentation.

1:19:10 And it was very interesting, and I know the audience was very engaged and appreciated it, so thank you for that.

1:19:17 OK, thanks, Brenda. Thanks, everyone, for attending. And if you have specific questions, you always can reach out to us.

1:19:24 Yeah, and I just wanted to say that today’s webinar was recorded; we’ll email a link with the slides and the Q&A document.

1:19:31 When you close the webinar window, please complete the survey. We’d love to have your feedback and ideas for future webinars.

1:19:38 And please do visit BIOPAC.com for screencasts, application notes, and information on future events.

