My name is Erik Smistad, and my research focuses on programs and algorithms that can automatically and quickly locate organs and other anatomical structures in medical images (CT, MR, ultrasound, etc.), with the goal of helping physicians interpret the images and navigate inside the body during surgery. Currently, I am working as a research scientist at SINTEF Medical Technology and as a postdoc at the Norwegian University of Science and Technology (NTNU).

Primary research interests

  • Segmentation and tracking of structures in ultrasound images.
  • Deep neural networks for medical image segmentation, object detection and classification.
  • Segmentation and centerline extraction of tubular structures, such as airways and blood vessels.
  • Parallel and GPU processing.

For more details, see my Research page.

If you are interested in the same topics, please don’t hesitate to contact me at ersmistad@gmail.com

Me @ GitHub Twitter LinkedIn ResearchGate

Tools I use

  • Ubuntu Linux as my main operating system
  • VIM as programming and latex editor
  • CLion/PyCharm as programming IDE
  • Git as my revision control system
  • CMake for cross-platform builds

Libraries and frameworks I use

  • FAST (Framework for heterogeneous medical image computing and visualization)
  • OpenCL (Open Computing Language)
  • OpenGL (Open Graphics Library)
  • Boost (C++ library)
  • Eigen (Linear algebra library)

23 Responses

  1. Beenish Aziz says:

    Hello Erik

    I am doing my PhD in computer science on biomedical imaging. I have segmented the coronary arteries using a Fuzzy C-means clustering algorithm in Matlab. To remove the aorta, I apply code for detecting the largest blob area, but I have to hard-code values in this code for each image. Can you suggest what I should do to detect the largest blobs automatically, so that only my desired coronary arteries are shown?

    % binaryImage = binaryImage > 200;
    binaryImage = imfill(binaryImage, 'holes');

    [labeledImage, numberOfBlobs] = bwlabel(binaryImage);

    blobMeasurements = regionprops(labeledImage, 'Area', 'Centroid');
    % Get all the areas
    allAreas = [blobMeasurements.Area] % No semicolon so it will print to the command window.
    menuOptions{1} = '0'; % Add option to extract no blobs.
    % Display areas on image
    for k = 1 : numberOfBlobs % Loop through all blobs.
        thisCentroid = [blobMeasurements(k).Centroid(1), blobMeasurements(k).Centroid(2)];
        message = sprintf('Area = %d', allAreas(k));
        text(thisCentroid(1), thisCentroid(2), message, 'Color', 'r');
        menuOptions{k+1} = sprintf('%d', k);
    end

    allowableAreaIndexes = (allAreas > 300) & (allAreas < 1300);
    keeperIndexes = find(allowableAreaIndexes);
    keeperBlobsImage = ismember(labeledImage, keeperIndexes);
    % Re-label with only the keeper blobs kept.
    newLabeledImage = bwlabel(keeperBlobsImage, 4); % Label each blob so we can make measurements of it
    imshow(newLabeledImage, []);
    title('Segmented Coronary Arteries');

    please help me in this regard.
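One way to remove the aorta automatically, instead of hard-coding area ranges per image, is simply to drop the connected component with the maximum area. Here is a minimal sketch of that idea in Python with NumPy and scipy (assumed available) rather than Matlab; the helper name `remove_largest_blob` is hypothetical:

```python
import numpy as np
from scipy import ndimage

def remove_largest_blob(binary_image):
    """Remove the largest connected component (e.g. the aorta)
    from a binary segmentation, keeping all smaller blobs."""
    labeled, num_blobs = ndimage.label(binary_image)
    if num_blobs == 0:
        return binary_image
    areas = np.bincount(labeled.ravel())  # area per label; index 0 is the background
    areas[0] = 0                          # never pick the background
    largest_label = int(np.argmax(areas))
    return binary_image & (labeled != largest_label)

# Tiny example: one 3x3 blob (playing the aorta) and one single-pixel blob
img = np.zeros((6, 6), dtype=bool)
img[0:3, 0:3] = True
img[5, 5] = True
result = remove_largest_blob(img)
print(result.sum())  # 1: only the small blob survives
```

The same pattern works in Matlab with `bwlabel`, `regionprops` and `max([blobMeasurements.Area])` in place of `ndimage.label` and `np.bincount`.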


  2. KSSR says:

    Hey Erik,
    I am quite new to the world of OpenCL. I can find videos on YouTube about how to use Visual Studio for OpenCL, but I am unable to create new programs, since in many videos they start from precompiled programs. It would be helpful if you could give a step-by-step procedure for writing my own code. I am using Visual Studio 2010 on an Intel i3 with no GPU. Please help me.

  3. Y.S. says:

    Hi Erik,

    Thanks a lot for sharing so many OpenCL apps here.

    I am new to OpenCL and have a general question about image3d_t. If I want to apply a Gaussian blur twice to a volume, do I need to read the data back from the GPU to the CPU and then call the OpenCL kernel a second time? Is there a way to keep the intermediate 3D data on the GPU without passing it back to the CPU? Otherwise, the data transfer between CPU and GPU back and forth is too slow, especially on NVIDIA cards, which need a transfer from image3d_t to a buffer every time.

    Thanks again.

    • Erik Smistad says:


      You don't have to transfer the data back to the CPU, but you have to call the kernel twice and use double buffering (because a kernel can only read from or write to a texture, not both). For NVIDIA GPUs you also have to copy from the buffer back to the image after each pass. However, this does not mean a transfer back to the CPU.

      So to sum up you have to do it like this:

      Image3D volume1 = your initial volume;
      Image3D volume2;
      Call kernel using volume1 as input and volume2 as output
      Call kernel again using volume2 as input and volume1 as output
      delete volume2
      volume1 contains the result

      For devices without the 3d image write ability (NVIDIA):

      Image3D volume;
      Buffer volumeBuffer
      Call kernel using volume as input and volumeBuffer as output
      Copy volumeBuffer to volume
      Call kernel again using volume as input and volumeBuffer as output
      Copy volumeBuffer to volume
      delete volumeBuffer
      volume contains the result
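The ping-pong pattern in the steps above can be sketched without a GPU. Here is a plain NumPy illustration, where `box_blur` is a hypothetical stand-in for the Gaussian blur kernel; the point is only that the two buffers swap input/output roles between passes, so no CPU round-trip is needed:

```python
import numpy as np

def box_blur(volume):
    """Stand-in for the GPU blur kernel: a 3-voxel box blur along each axis."""
    out = volume.copy()
    for axis in range(volume.ndim):
        out = (np.roll(out, 1, axis=axis) + out + np.roll(out, -1, axis=axis)) / 3.0
    return out

# volume1 and volume2 play the roles from the steps above
volume1 = np.random.rand(8, 8, 8).astype(np.float32)
original = volume1.copy()
volume2 = np.empty_like(volume1)

volume2[:] = box_blur(volume1)  # pass 1: volume1 as input, volume2 as output
volume1[:] = box_blur(volume2)  # pass 2: volume2 as input, volume1 as output
# volume1 now holds the twice-blurred result; the intermediate data
# (volume2) never had to leave "device" memory
```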

      • Anonymous says:

        Thank you for the suggestion. It works great by copying between buffer and image3d.

        For NVIDIA cards, is it better to use buffers directly instead of image3d, so we don't need the copy between buffer and image3d? I guess image3d access is faster than buffer access, but overall, which way is faster: using buffers only, or a mixture of image3d and buffers with copies between them?

        Another question: I got an error when using sqrt() in OpenCL on an NVIDIA card. Do you have any experience with this?

        Thank you.


        • Erik Smistad says:

          Which is faster depends on what you want to do.
          3D images have the advantage of 3D caching, while buffers only have 1D caching. This improved caching may hide the penalty of having to do a copy.

  4. Talita says:

    Hi Erik,

    I'm trying to compile the Tube-Segmentation-Framework code and I'm getting errors like the ones below:

    /home/…/Tube-Segmentation-Framework/parallelCenterlineExtraction.cpp:727:46: error: no matching function for call to ‘oul::HistogramPyramid3DBuffer::HistogramPyramid3DBuffer(OpenCL&)’
    /home/…/Tube-Segmentation-Framework/parallelCenterlineExtraction.cpp:727:46: note: candidates are:
    /home/…/Tube-Segmentation-Framework/OpenCLUtilityLibrary/HistogramPyramids.hpp:57:9: note: oul::HistogramPyramid3DBuffer::HistogramPyramid3DBuffer(oul::Context&)
    /home/…/Tube-Segmentation-Framework/OpenCLUtilityLibrary/HistogramPyramids.hpp:57:9: note: no known conversion for argument 1 from ‘OpenCL’ to ‘oul::Context&’
    /home/…/Tube-Segmentation-Framework/OpenCLUtilityLibrary/HistogramPyramids.hpp:55:7: note: oul::HistogramPyramid3DBuffer::HistogramPyramid3DBuffer(const oul::HistogramPyramid3DBuffer&)
    /home/…/Tube-Segmentation-Framework/OpenCLUtilityLibrary/HistogramPyramids.hpp:55:7: note: no known conversion for argument 1 from ‘OpenCL’ to ‘const oul::HistogramPyramid3DBuffer&’

    Could you help me with that?

    Thank you,

    • Erik Smistad says:

      This happens because you have downloaded the wrong version of the OpenCLUtilityLibrary. GitHub does not handle submodules well when downloading an archive. If you had used git directly (with git submodule init/update) you would have gotten the correct version. Anyhow, you should download this version of the OpenCLUtilityLibrary: https://github.com/smistad/OpenCLUtilityLibrary/tree/73de6709a7e722ffba0523258418bf364c31c43a

      Let me know if that works or not.

      • Ieva says:

        Hi Erik,
        I faced the same problem and git checkout 73de6709a7e722ffba0523258418bf364c31c43a worked for me :). Unfortunately, another problem occurred during the build: "/usr/include/dirent.h:353:5: error: reference to 'size_t' is ambiguous". The error is repeated in the SIPL and OpenCLUtilityLibrary libraries. Do you have any idea how to fix this?

        Thank you,

  5. mahboobeh says:

    Hi Erik,
    I am trying to measure the diameter of coronary arteries. First I segmented an angiography image with a level-set method; next I need to extract the centerline of the segmented vessels with GVF, but I don't know how to do it.
    I use Matlab for my project.
    If it is possible for you, can you guide me on this?
    Thanks a lot

  6. Anonymous says:

    Hi Erik,
    I'm trying to get the OpenCL run-time libraries working so I can GPU-crunch for BOINC projects.
    I have tried Debian, Ubuntu, Kubuntu, Lubuntu and UberStudent (Ubuntu with Xfce)
    and had no luck with the restricted drivers.
    BOINC gives the message:
    18-Mar-2013 21:47:26 [—] No usable GPUs found
    BOINC is looking for an OpenCL runtime.

    Oh yes, I have an ATI/Radeon HD 7750.

    I went to the AMD website and downloaded the "Catalyst" driver for Linux.
    This does not work because OpenCL, while included in "Catalyst" for Windows,
    is not in "Catalyst" for Linux. The Linux "Catalyst" is the fglrx driver.
    I read forums; they said to get the OpenCL SDK. Did that. Didn't work,
    as a readme in the download said the run-times are not part of the SDK as of version 2.8.

    I tried SDK version 2.7. I get seg faults because you have to carefully match the SDK version with the OpenGL video driver.
    I really appreciate the Debian/Ubuntu package managers now.

    For a sanity check, I tried the card in a Windows box with the Windows "Catalyst" driver (with the runtimes). W O R K S G R E A T.

    But I just got a new box with better power and cooling and *really* wanted to run Linux.

    Any suggestions, or friends who run GPUs for BOINC on Linux I might ask?
    Many thanks,
    Jay E.

    MANY thanks to Vincent Danjean, who wrote a wonderful article explaining the inter-dependencies. Slightly old, but a great, in-depth explanation.

    • Erik Smistad says:

      First of all, the OpenCL runtime is included in AMD's graphics driver. The SDK is only needed if you want to create your own applications that use OpenCL.

      Second, I haven't used BOINC. However, my best guess is that BOINC is running as another unix user and thus doesn't get access to X, the drivers, or the GPU. I have experienced this myself when trying to run OpenCL as another unix user.

      A quick Google search suggests it should be simple to run BOINC as yourself by changing BOINCUSER=boinc to BOINCUSER=yourusername in the configuration file. Or follow this guide http://www.spy-hill.net/myers/help/boinc/unix-personal.html to install and run BOINC as a personal installation.

      Another tip, which has worked for me on NVIDIA platforms before, is to run the command "xhost +" as the other unix user (boinc in your case). This command gives users and machines access to X on a given machine.

      Hope that helps. Good luck.

  7. aidonian says:

    Thank you for sharing such knowledge.
    I am grateful for what you are doing.

  8. vipul jain says:

    Hey, I want to ask a question.
    I want to display graphs in the browser as images.
    I am using matplotlib for that, but I'm not getting the results I want. Can you help me?

  9. ines says:

    Hi Erik
    Thank you very much for this interesting information. I have tried to implement your code, but it doesn't work: when I execute it, a black window is shown. My OS is Windows Vista and my programming editor is Visual Studio 2010. Can you help?

  10. Dan says:

    Hi Erik,
    Thanks for sharing all this information. I have tried your code for the Marching Cubes algorithm. It works fine with the raw files from the volvis website, but if I try to use my own raw file (8 bit, size: 512x512x163, spacing: 1,1,1) it shows not a 3D model but 2D slices positioned beside each other. Strange. Can you help?

    Thanks and Kind regards,

    • Erik Smistad says:


      Not sure I understand what is wrong. Have you remembered to change the size from 256x256x256 to 512x512x163 in the line: runMarchingCubes(parseRawFile("aneurism.raw", 256, 256, 256), 256, 256, 256, 1, 1, 1, 37.0); ?
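The symptom Dan describes (2D slices lying beside each other) is typical when a headerless raw file is read with the wrong dimensions. A hedged sketch in Python rather than the original C++ (the function name `parse_raw_file` and the demo file are hypothetical) showing the sanity check that catches this:

```python
import numpy as np
import os
import tempfile

def parse_raw_file(filename, size_x, size_y, size_z, dtype=np.uint8):
    """Read a headerless raw volume. If the given dimensions don't match
    the file contents, the volume comes out scrambled, so check the
    voxel count against the file size first."""
    data = np.fromfile(filename, dtype=dtype)
    expected = size_x * size_y * size_z
    if data.size != expected:
        raise ValueError(f"file has {data.size} voxels, expected {expected}")
    return data.reshape(size_z, size_y, size_x)  # raw volumes are usually stored slice by slice

# Demo with a synthetic 4x4x3 volume written to a temporary file
vol = np.arange(48, dtype=np.uint8).reshape(3, 4, 4)
path = os.path.join(tempfile.mkdtemp(), "volume.raw")
vol.tofile(path)
loaded = parse_raw_file(path, 4, 4, 3)
```

Passing the wrong size (e.g. 4x4x4 for this file) raises the ValueError instead of silently producing a scrambled volume.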

  11. vikas says:

    thanks for sharing your expertise. you are truly gifted with great talent and skills. keep posting..cheers
