[ { "title": "Project Work 5 - Outlook and Conclusion", "url": "https://mt2-erlangen.github.io/conclusion/", "body": "Overview\n\nIntroduction\nk-Space\nImage Reconstruction\nFilters \nOutlook and Conclusion\n\n5. Outlook and Conclusion\nIn this project, you learned the concept of $k$-space and Fourier transform in MRI.\nNow you know the basic concepts of how MR images are created.\nIn subsequent steps, image processing can be applied to facilitate the diagnosis for the radiologist.\nThere is great research interest in MRI and various processing methods.\n\nName at least two examples of current trends in image processing in MRI. Provide citations for each example and describe them briefly.\nFor your examples, are they already applied to clinical routine? If not, do you think they soon will be? Try to explain why or why not.\n\nIn the last part, summarize what you have implemented and explained in your project report. Review the shortcomings of your approaches and how they could be mitigated in the future, and conclude your report.\nSubmission\nSubmit your project report as a PDF and your code as a ZIP file.\nYour project must compile as a whole!\nPackaging your code\nUse the provided script to create the ZIP file:\n./zip_submission.sh\n\nThis creates submission.zip which unpacks directly to src/. No extra nesting, no build artifacts.\nDo not pack the folder manually. A wrongly structured ZIP (e.g. unpacking to a project folder instead of src/) cannot be graded automatically and may result in 0 points.\nIf you choose to pack manually anyway, make sure the ZIP unpacks to exactly src/ and nothing else.\n" } { "title": "Project Work 4 - Filters", "url": "https://mt2-erlangen.github.io/filters/", "body": "Overview\n\nIntroduction\nk-Space\nImage Reconstruction\nFilters\nOutlook and Conclusion\n\n4. Filters\nIn your exercises, you have learned the process of filtering an image. 
In this section, we'll look at the relation between image and $k$-space with respect to applying a filter or multiplication operation on one of them. Feel free to use and\nlook at the code base from exercise 4 to get inspired.\nWe'll first decrease the resolution (sharpness) of the image by manipulating the image itself or the $k$-space. Then,\nwe're going to decrease the array size (sometimes also called resolution; this term is ambiguous) in 2 different ways.\n4.1 Sinc Filter and Box Multiplication\n4.1.1 Sinc Filter Applied on Image\nThe (normalized) sinc function is defined as follows:\n$$ \\mathrm{sinc} (x) := \\frac{\\mathrm{sin}(\\pi x)}{\\pi x} $$\nPlease implement a 2D sinc filter such that it acts as a filter in the $x$- and in the $y$-direction independently.\nThis means that during filtering you can multiply a sinc in the $x$- and a sinc in the $y$-direction\n(as opposed to, e.g., using an absolute 2D distance from the point that the filter is applied to).\nFollowing exercise 4, implement\nthe class in SincFilter2d.java. This filter has two parameters: filterSize and downScale. Suppose\nwe have input $x$, with $x$ being an integer and $x \in [-\mathrm{filterSize}/2, \mathrm{filterSize}/2)$;\nthe output is: $$\mathrm{out} = \mathrm{sinc}(x/\mathrm{downScale})$$\npackage project;\n\nimport mt.LinearImageFilter;\nimport org.apache.commons.math3.analysis.function.Sinc;\n\npublic class SincFilter2d extends LinearImageFilter {\n\n public SincFilter2d(int filterSize, float downScale) {\n\n super(filterSize, filterSize, "Sinc2d (" + filterSize + ", " + downScale + ")");\n\n var s = new Sinc(true);\n\n /* your code here, get inspiration in exercise 4 if you don't remember */\n\n normalize();\n }\n}\n\nYou can evaluate the sinc function from the used library with the s.value(...input here...) method.\nYou should now apply the SincFilter2d filter to the complex MR image. But how? 
The filter is real-valued, and\nthe convolution operation with a certain filter is linear. Consequently, you can apply the real filter\nto the real and imaginary parts of your signal to be filtered separately.\nPlease implement a class LinearComplexImageFilter which does exactly that: applying a filter to real and imaginary parts separately. Your application of the filter will look like the following:\nSincFilter2d realFilter = new SincFilter2d(31, 4.0f);\nvar complexFilter = new LinearComplexImageFilter(realFilter);\nComplexImage filteredImage = complexFilter.apply(mrImage);\n\nPlease show the filtered MR image (magnitude is enough) and its corresponding $k$-space! Please describe the difference\nbetween the original and the filtered $k$-space. Look at both on a logarithmic scale, so that you can see the differences.\nMind that the intensity differences you can see on a logarithmic scale are huge, so if you use log10, a difference\nof 1 is actually a factor of 10 difference.\n4.1.2 Box Function Applied to $k$-Space\nSo far, you should have implemented a filter for the complex image and used the apply() method.\nThe effect on $k$-space by filtering the image is described by the so-called convolution theorem, but let's not dive\ninto theory here.\nNow let's do it the other way around: manipulate $k$-space data and observe how it changes the image. We're choosing a box multiplication,\nwhich instead performs point-wise multiplication between the input $k$-space matrix and a box function, as shown in Figure 4.1.\n\n \n\n\n Figure 4.1. 
Illustration of the box multiplication: point-wise multiplication between the example k-space array (left) and the box function (middle).\n\nTo implement 2D box multiplication, we implement the setOuterToZero() method in ComplexImage.java.\nFor a practical implementation, we suggest first setting certain lines to zero, then certain columns.\nYou will need to use the setAtIndex() method for this.\npublic void setOuterToZero(int lines, int axis)\n\nHere, the parameter lines defines the zero-padding size of the box function (in Figure 4.1, this was set to 1),\nand the parameter axis defines on which axis the box filter is applied (0 is the "first" axis, $x$, and 1 is the\nsecond axis, $y$).\nWith this function, you can set everything but the center of the kSpace buffer to 0:\nkSpace.setOuterToZero(96, 0); // kx-direction\nkSpace.setOuterToZero(96, 1); // ky-direction\n\nFigure 4.2 shows a schematic of the use of the parameter lines = 96 as used in our code example.\nThe black "0"-areas in Figure 4.2 show where you should set $k$-space to 0.\n\n\n\n\nFigure 4.2. A schematic of $k$-space after running the setOuterToZero() method.\nShown here is the parameter lines = 96 in both dimensions as used in our code example, as well as the k-space\nafter application of the method.\n\nApply the code shown above in your Project.java.\nPlease show the zeroed $k$-space (use the previously unfiltered $k$-space for zeroing) and its corresponding image!\nIs the image similar to the sinc-filtered image? If so, why?\nIn your Project report (2.3), you should:\n\n(2.3) Explain the properties of high-frequency and low-frequency components in $k$-space. What components are relevant for the image contrast? What about the image details?\n(2.3.1) Explain the effect of the 2D Sinc filter on the MR image and on $k$-space. 
What can be seen in $k$-space?\n(2.3.2) Explain the effect on the MR image when setting high-frequency parts of $k$-space to zero and compare the result with that of the 2D Sinc filter.\n(2.3.2) Play around with different values for lines. How large can you set this value without a visible loss in image quality? At what value can the heart no longer be recognized? Explain how this could be used for image compression.\n\n4.2 Reducing the Image Size\nWe'd like you to understand the conceptual difference between the "sharpness" of an image, which is determined by the\ninformation content it represents (e.g., visible in $k$-space), and its array size, which limits the amount\nof information that can be represented.\n4.2.1 Cropping $k$-Space\nWhen the (array) size of $k$-space is reduced, so is the (array) size and resolution of the reconstructed image, as the\n(i)DFT / (i)FFT always connects 2 spaces of equal length.\nWe can perform this operation by cropping $k$-space to its center frequencies.\nFor this experiment, you will add another constructor to the ComplexImage class to extract a cropped $k$-space\nfrom the full acquired array.\nThis third constructor works only when the size of the cropped image is smaller than that of the original image.\n/*\n Params:\n width: Width of the cropped image\n height: Height of the cropped image\n name: Name of the cropped image\n bufferReal: Buffer of the real part of the original image\n bufferImag: Buffer of the imaginary part of the original image\n inputWidth: Width of the original image\n inputHeight: Height of the original image\n*/\npublic ComplexImage(int width, int height, String name, float[] bufferReal, float[] bufferImag, int inputWidth, int inputHeight)\n\nTo set the buffer of the cropped $k$-space from the center area of the original $k$-space, you need to implement a new\nmethod setBufferFromCenterArea() in the Image class in Image.java. 
This method can then be called by\nthe constructor.\npublic void setBufferFromCenterArea(int width, int height, float[] buffer, int inputWidth, int inputHeight)\n\nYou need to create two integer variables, offsetWidth and offsetHeight,\nto calculate the index where the original $k$-space is cropped. Once you have determined where to crop,\ncopy the values found there into the cropped $k$-space using setAtIndex(). Figure 4.3 shows the parameters geometrically to make them easier to understand.\n\n \n\n\n Figure 4.3. Visualization of the suggested use of the parameters. The blue-edged image is the original k-space, and the red-edged image is the cropped k-space.\n\nAs said above, when you are done implementing the method setBufferFromCenterArea() in the Image class, utilize this\nmethod in the third constructor of the ComplexImage class to set the cropped $k$-space from the original.\nShow the cropped $k$-space as well as a reconstructed image from that $k$-space, see Figure 4.4.\n\n\n \n \n\n \n\n Figure 4.4. Cropped k-space (left) and reconstructed MR image (right).\nSince the grid size of k-space gets smaller, the resolution of the reconstructed image decreases as well.\n\n4.2.2 Max Pooling\nThis subsection deals with another operation that can be used to create low-resolution images. Beware! This is usually not\nthe method of choice for decreasing image resolution if you want to maintain the image information to be shown. However, it\nis a method often used (at this time, at least) in deep learning algorithms that decrease and increase the images they\nwork on for feature extraction.\nThe method is one of several pooling operations: in this case, max pooling, a pooling operation that extracts\nthe maximum value of patches (blocks) of an image (or feature map) and uses it to create a downsampled (pooled) image\n(or feature map). (source: max pooling explained)\n\n \n\n\n Figure 4.5. Illustration of the Max-Pooling operation we are implementing. 
The maximum value of each 2x2 block/patch\nis extracted. (source: Max-Pool)\n\nFigure 4.5 shows a small example of max pooling. Here, the image patch (block) has a width and height of 2,\ndefined as block_width and block_height, respectively. The algorithm:\n\n\nThe max-pooling operation extracts the maximum value of the red block, yielding 20;\n\n\nThis $2\times2$ block then moves horizontally to the yellow block. The step length of this horizontal move is 2.\nThis parameter is defined as stride_width in the implementation;\n\n\nAfter looping over all blocks in the horizontal direction, the max-pooling operation moves vertically and starts again\nat the left-most position, which yields the purple block. The step length of this vertical move is also 2 in this example and is defined as stride_height.\n\n\nLoop through all horizontal blocks in every vertical move until the end of the input matrix (feature map). In this example, the final block is the green one.\n\n\nTo help you implement max pooling, we provide its core structure. To begin with, you can copy the following code block and save it as MaxPooling2d.java:\npackage project;\n\npublic class MaxPooling2d {\n\n protected int block_width = 0;\n protected int block_height = 0;\n protected int stride_width = 0;\n protected int stride_height = 0;\n protected String name = "MaxPooling2d";\n\n public MaxPooling2d(int block_width, int block_height, int stride_width, int stride_height) {\n\n this.block_width = block_width;\n this.block_height = block_height;\n this.stride_width = stride_width;\n this.stride_height = stride_height;\n }\n\n public Image apply(Image input) {\n\n /* your code here */\n\n }\n}\n\nFurthermore, we provide a test file MaxPooling2dTests.java located in src/test/java/project to help you test your implementation. The test is commented out to avoid conflicts when running the program prior to this point, so you need to remove the comment symbols around the test function in MaxPooling2dTests.java. 
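Before filling in apply(), it can help to see the pooling loops in isolation. The following is only a sketch on a raw row-major float buffer, not the required MaxPooling2d.apply(Image) implementation: the class name MaxPoolSketch and the standalone maxPool() helper are made up for illustration, and it assumes one common convention for incomplete border blocks (they are simply dropped). Check the provided tests for the behavior expected in this project.

```java
// Sketch of 2D max pooling on a row-major float buffer.
// In the project, the same loop structure would live inside
// MaxPooling2d.apply(Image), reading and writing via the Image
// accessors used elsewhere on this page instead of raw arrays.
public class MaxPoolSketch {

    static float[] maxPool(float[] in, int w, int h,
                           int blockW, int blockH,
                           int strideW, int strideH) {
        // Only complete blocks are pooled; incomplete border blocks
        // are dropped (an assumption of this sketch).
        int outW = (w - blockW) / strideW + 1;
        int outH = (h - blockH) / strideH + 1;
        float[] out = new float[outW * outH];
        for (int oy = 0; oy < outH; oy++) {
            for (int ox = 0; ox < outW; ox++) {
                float max = Float.NEGATIVE_INFINITY;
                for (int by = 0; by < blockH; by++) {
                    for (int bx = 0; bx < blockW; bx++) {
                        int x = ox * strideW + bx;
                        int y = oy * strideH + by;
                        max = Math.max(max, in[y * w + x]);
                    }
                }
                out[oy * outW + ox] = max;
            }
        }
        return out;
    }

    public static void main(String[] args) {
        float[] img = {
             1, 20,  3,  4,
             5,  6,  7,  8,
             9, 10, 11, 12,
            13, 14, 15, 16
        };
        // 2x2 blocks with stride 2 -> {{20, 8}, {14, 16}}
        float[] pooled = maxPool(img, 4, 4, 2, 2, 2, 2);
        System.out.println(java.util.Arrays.toString(pooled));
    }
}
```

The outer two loops walk the output grid; the inner two scan one block. With stride equal to block size the blocks tile the image without overlap, which is the case used in the code example further below.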
After implementing MaxPooling2d.java,\nrun the test and report whether you get the expected output:\n{{173, 173, 146},\n {173, 173, 146}}\n\nPlease explain what happens in the case of incomplete blocks at the boundary. For instance, change the pooling parameters in MaxPooling2dTests.java:\nMaxPooling2d mp = new MaxPooling2d(2, 2, 1, 2);\n\nPlease apply MaxPooling2d to the heart MR image and show the output image you get.\n// MaxPooling2d\nfloat[] mag = mrImage.getMagnitude();\nImage mrMagImage = new Image(mrImage.getWidth(), mrImage.getHeight(), "magnitude of mrImage");\nmrMagImage.setBuffer(mag);\n\nMaxPooling2d mp = new MaxPooling2d(4, 4, 4, 4);\n\nImage mrMagImage_MP = mp.apply(mrMagImage);\n\nYou should get something like this:\n\n\n \n \n\n \nFigure 4.6. Input and output images of the max-pooling operation. The input image is of size [256, 256], whereas the output image is of size [64, 64].\n\nIn your Project report (2.4), you should:\n\n(2.4.2) Explain how to improve the resolution of the MR image. What is the trade-off for that? Scan time? Cost?\n(2.4.2) Compare the reconstructed images from cropping $k$-space and from max pooling in terms of image content. Which result is closer to the original reconstructed image? Why do you think so?\n\nNext task: Outlook and Conclusion\n" } { "title": "Project Work 3 - Image Reconstruction", "url": "https://mt2-erlangen.github.io/fftshift/", "body": "Overview\n\nIntroduction\nk-Space\nImage Reconstruction\nFilters\nOutlook and Conclusion\n\n3. Image Reconstruction\nIn the last section, you made an acquired $k$-space manipulable in Java. Now, we want to actually work with it. To reconstruct an MR image from it, we need to use an inverse Fourier Transform. The method for the Fourier Transform itself is provided by us, but you need to implement the workflow, which also involves shifting the buffer array, as explained below. 
By the end of the section, you will have implemented a framework to reconstruct an image from $k$-space and to calculate a $k$-space from an image.\n3.1 FFT and FFT Shift\n3.1.1 What is an FFT / iFFT?\nThe Discrete Fourier Transform is an extremely important tool in all engineering contexts. One of the reasons why it had such\ngreat success as an analysis tool and also as an MRI reconstruction tool is a very pragmatic one: it has an extremely fast and efficient\nimplementation algorithm: the Fast Fourier Transform (FFT).\nOne of the computational particularities of the FFT is that it uses a representation of the $k$-space where the so-called\nDC component - the value in $k$-space that refers to $k=0$ - is at the index 0 of the transformed array. In MRI, however,\nthe DC component is located at the center of the acquired data. So we first need to rearrange $k$-space in order for the FFT\nalgorithm to do its work.\nTo be precise, the DFT / FFT describes the measurement process of MRI (getting the spatial frequencies from the measured object).\nThe inverse operation of that, which you need to reconstruct the image from the spatial frequencies, is the inverse DFT / iDFT\nor inverse FFT / iFFT. The forward and inverse Fourier Transforms only differ by a minus sign in front of one variable in their\ndefinition, but let's stick to proper wording.\n3.1.2 $k$-space and the FFT Shift\nAs stated above, MRI $k$-space is measured with its low-frequency components in the middle of the matrix.\nFor the sake of simplicity, let's first look at a 1D representation by looking at one line of the 2D matrix\nin the middle of the $k$-space.\n\n \n\n\n \n\n\n Figure 3.1. A magnitude image of k-space (top) in logarithmic scale, and the signal intensity along the\nred-line direction (bottom).\n\nFigure 3.1 shows that signal intensities concentrate in the middle of the spectrum - around the DC component -\nas given by the nature of the MRI acquisition. 
From an implementation point of view, however,\nthe DC component should be shifted to the first index before applying an iFFT. Let's not go too deep into Fourier transform\ntheory or the specifics of the FFT algorithm here. Just keep in mind, (i)FFT wants the DC component at index 0, MRI measures\nthe DC component at index $N/2$.\nThe so-called FFT shift is a construct that is often used (not only in MRI). It simply shifts samples from one half of\nthe spectrum to the other half. Figure 3.2 shows an example of the 1D FFT shift. A full spectrum lies in an index range of $[0, N-1]$, where $N$ represents the vector length.\nSamples in a range of $[0, N/2-1]$ are then shifted to the other half spectrum of $[N/2, N-1]$ and vice versa.\n\n \n\n\n Figure 3.2. A graphical representation of the FFT shift. \n\n3.2 Apply FFT Shift to the 1D Case\nTo get a better understanding of the FFT shift, you will start in 1D and implement a new class ComplexSignal.\npackage project;\n\nimport mt.Signal;\nimport java.util.Objects;\n\npublic class ComplexSignal {\n protected mt.Signal real; //Image object to store real part\n protected mt.Signal imag; //Image object to store imaginary part\n protected String name; //Name of the image\n}\n\nCreate constructors and getters. Remember: class objects, real, imag, and name,\nmust be set in the constructor. Use the usual constructors for ComplexSignal, as shown below.\n(Side note: since the FFT only works for signal lengths of 2 to the power of $n \\in \\mathbb{N}$,\nour implementation restricts to those cases. 
This applies to the 2D case as well.)\npublic ComplexSignal(int length, String name)\npublic ComplexSignal(float[] signalReal, float[] signalImag, String name)\n\npublic float[] getReal() // get the buffer of the real\npublic float[] getImag() // get the buffer of the imag\npublic String getName()\npublic int getSize()\n\nGenerate a sawtooth-like wave (remember exercise 1), composed of five sine waves with different frequencies in a generateSine() method.\nFrequencies for five sine waves are\n$[\\text{numWaves}, 2 \\cdot \\text{numWaves}, \\cdots, 5 \\cdot \\text{numWaves}]$,\nand the number of samples is equal to the size of the ComplexSignal.\nSet the real part of the ComplexSignal as the constructed signal and the imaginary parts to zero.\nYou can use setAtIndex() to assign corresponding values to the real and imaginary parts.\npublic void generateSine(int numWaves)\n\nYou can plot your sinusoid wave using the given method DisplayUtils.showArray(). In this case, the signal length is 256.\n\n \n\n\n \n\n\n Figure 3.3. The real (top) and imaginary (bottom) parts of the sinusoidal wave are composed of five different sine waves.\n\nTo show the magnitude of the signal, you need to implement calculateMagnitude() and getMagnitude() for displaying with DisplayUtils.showArray(). You can use atIndex() and setAtIndex() for calculateMagnitude().\nprivate Signal calculateMagnitude()\npublic float[] getMagnitude()\n\n\n \n\n\n Figure 3.4. The magnitude of the summed-sinusoids signal.\n\nNow, apply an FFT to the signal using the given method FFT1D() from ProjectHelpers.java and plot the magnitude signal. The methods are commented out\nto avoid conflicts when running the program prior to this point. Remove the comment symbols for the methods related to ComplexSignal() in ProjectHelpers.java: FFT1D(), toComplex(), fromComplex(), and fft().\n\n \n\n\n Figure 3.5. The magnitude of the FFT of the signal. 
Since the complex sinusoid signal is composed of five different sine waves, there are five peaks at the low-frequency part.\n\nOnce you have created the FFT result, it is time to implement the FFT shift.\nIf you shift the FFT signal to the right by one sample, the rightmost signal shifts to the leftmost index: it's a cyclical shift.\nTake your time to understand this, referring to Figure 3.2. If you shift by $N/2$,\nthe left and right half of the signal are swapped with each other. In other words, you can implement the fftShift1d()\nmethod using a swap() method, which only swaps the left and right half of the array.\nYou will need to use setAtIndex() and atIndex().\nAdditionally, as signals are complex numbers, you must consider both the real and imaginary parts.\npublic void fftShift1d()\nprivate Signal swap(Signal input)\n\nYou can plot the FFT shift result and play around, shifting the signal back and forth using\nfftShift1d() multiple times.\n\n \n\n\n \n\n\n Figure 3.6. Shown is the result of an FFT shift applied once (top) and twice (bottom) to the FFT result. The figure at the bottom shows the same as Figure 3.5, meaning that if the FFT shift is applied twice, the signal comes back to the original position (this is valid for even-length signals). This property is important when you reconstruct k-space. Moreover, the y-axes represent the magnitude of the FFT-shifted S and S' for the plots above and below, respectively, where S and S' stand for FFT(s) and FFTshift(FFT(s)).\n\n3.3 Expand FFT shift to 2D in ComplexImage\nExpanding the concept of the FFT shift from the 1D case to the 2D case is not so complicated. It is the result\nof doing an FFT shift along the first dimension and then the second.\n\n \n\n\n Figure 3.7. Graphical example of the 2D FFT shift. One quadrant is swapped with another quadrant in the diagonal direction. 
This results from the shift being carried out along both the x- and y-directions.\n\nYou need to consider that the swapping is carried out along both $x$- and $y$-directions in the 2D case, meaning that one quadrant is swapped with another in the diagonal direction. Now move your working Java code into ComplexImage.java. You will add new methods called fftShift() and swapQuadrants():\npublic void fftShift()\nprivate Image swapQuadrants(Image input)\n\nIn fftShift(), use swapQuadrants() to swap samples and setBuffer(),\nwhich is a member method of the Image class, to write the swapped samples to the buffer. Always consider that you are\ndealing with complex numbers, using both real and imag.\nYou can expand your implementation of the 1D case to the 2D case with swapQuadrants().\nDisplay the result of your 2D FFT shift.\n\n\n \n \n\n \n\n Figure 3.8. k-spaces before (left) and after (right) applying the FFT shift.\nLow-frequency components are shifted to the edge after the shift,\nand vice versa. To match the k-space size to an integer power of 2 for the FFT,\none dimension needed to be zero-padded, which shows as black strips (not necessary in our case, so this part of the implementation is optional).\n\n3.4 Reconstruct MR image\nNow, we are ready to reconstruct an MR image. The overview of the MR reconstruction process is depicted in Figure 3.9.\nOne key point here is that after applying an FFT shift to the $k$-space or the image once,\nyou have to apply the FFT shift one more time after applying the (i)FFT to bring it back to its original signal.\nPlay around with (i)FFTs and the shifts and you will see.\nInverseFFT2D() and FFT2D() methods are provided in ProjectHelpers.java.\n\n \n\n\n Figure 3.9. An overview of the MR reconstruction process.\n\nReconstruct the MR image from the measured $k$-space data.\nShow image magnitude, image phase, image real part, and image imaginary part as below.\n\n\n \n \n\n\n \n \n\n\n \n\n Figure 3.10. Reconstructed images. 
Image titles are presented in the top-left corner of each figure.\n\nThen, let's check whether a forward FFT works correctly on the reconstructed image. The original $k$-space should be reproduced by the FFT of the reconstructed image.\n\n\n \n \n\n \n\n Figure 3.11. Reproduced k-space from the reconstructed image. The reproduced k-space appears identical to the original k-space.\n\nIn your Project report, you should:\n\nReconstruction (2.2): Describe reconstruction in your own words. Why is the FFT shift necessary?\nFFT (2.2.1): Explain why an FFT shift needs to be carried out on $k$-space before and after the iFFT is applied.\nWhat is the purpose of the FFT shift? Where are low-frequency components located in $k$-space?\nWhat happens if you only apply the FFT shift before, but not after, performing the iFFT on the $k$-space?\n(explain this with figures)\nInterpretation (2.2.2): Interpret the reconstruction results. Which image do radiologists view and diagnose with among the images of\nmagnitude, phase, real part, and imaginary part?\nCan $k$-space be reproduced from the reconstructed image like the original $k$-space?\nIf so, what is the procedure for that? Please explain the reasons why or why not.\n\nNext task: Filters\n" } { "title": "Project Work 2 - k-Space", "url": "https://mt2-erlangen.github.io/kspace/", "body": "Overview\n\nIntroduction\nk-Space\nImage Reconstruction\nFilters\nOutlook and Conclusion\n\n2. Complex Numbers and k-Space\nMeasured MRI signals are essentially radiofrequency waves summed over the imaged volume. 
Due to the nature of these waves\nand the underlying spin precession, the most convenient mathematical formulation is the framework of complex numbers.\nComplex numbers are generally very well suited to describe the magnitude and phase of an oscillation/precession.\nIn this section, you will load a measured and provided MRI $k$-space into an array, using a provided function, and save it in an instance of a class for complex images, written by you. This class also provides calculation of the magnitude and phase of the complex images from their real and imaginary parts.\n2.1 Complex Numbers\n2.1.1 A Single Complex Number by Itself\n\n \n\n\n Figure 2.1. A complex number can be visually represented as a pair of numbers (a, b) forming a vector\n in the so-called complex plane. Re stands for the real part, shown on the horizontal axis,\n Im stands for the imaginary part, shown on the vertical axis, and i is the "imaginary unit". (reference: complex numbers on wiki)\n\nAs illustrated in Figure 2.1, a complex number $z$ is defined as\n$$ z = a + i \cdot b $$\nHere, both $a$ and $b$ are real numbers. $i$ is the "imaginary unit", which is multiplied with the imaginary part of the complex\nnumber to denote the "2nd dimension" of the complex number. (Side note: For mathematical convenience of this powerful number framework,\nit satisfies $i^2 = -1$.) Likewise, you can think\nof the "real unit" as being $1$. One can also denote the complex number $z$ as an ordered pair,\n$$ z = (\mathrm{Re}(z), \mathrm{Im}(z)) $$\nwhere its real and imaginary parts are $\mathrm{Re}(z) = a$ and $\mathrm{Im}(z) = b$, respectively. 
Notably, the blue vector\nrepresentation of $z$ in Figure 2.1 can be characterized by\n\n\nthe length of the vector, i.e., the absolute value or magnitude, $$ r = |z| = \sqrt{a^2 + b^2} $$\n\n\nits angle to the positive $\mathrm{Re}$-axis, i.e., the argument or phase, $$\varphi = \mathrm{atan2}({b},{a})$$\n\n\nTherefore, using Euler notation, one can write the complex number $z$ in the form\n$$ z = |z| \cdot e^{i\varphi} $$\n2.1.2 The Relation to MR Images\nAt every location of the object that is imaged in the MR scanner, the signal that is measured is produced by precessing\n(rotating) spins. The precessing spins create a precessing magnetization that results in radiofrequency radiation and can\nbe measured with radiofrequency coils.\n\n \n\n\n Figure 2.2. The precessing magnetization M that produces the MR signal can be detected from two orthogonal\ndirections to fully capture the signal rotation, and is then demodulated by multiplication with sinusoid or cosinusoid\nsignals. In reality, even a single coil can be used and demodulated to capture the\nrotation. (Reference: Real vs. Imaginary Signals)\n\nIn consequence, the MR image can be described by complex numbers, i.e., a 2D plane of oscillations that are complex numbers. After digitization,\nwe can then treat MR images as 2D arrays of complex numbers, i.e., every element of the array is a complex number.\n2.1.3 MR Signals Are Measured in $k$-Space\nAs mentioned in Section 2.1.2, MR signals are generated by the precession of magnetization. In the context of the measurement\nprocess, this happens after a radiofrequency pulse excitation of the measured volume, which causes the magnetization\nto precess.\nAs covered in the lecture, the sum of the total magnetization of the excited spins present in the measured volume\nis what matters for the measured signal. 
The magic of MRI is that the volume can be manipulated by external magnetic field gradients\nin such a way that the resulting signal is a Fourier transform of the image to be measured. (Making MRI feasible by using\nthis method is a Nobel prize-winning idea!)\nThe Fourier transform of an image (space) represents the spatial frequencies and is usually called $k$-space.\nAs such, the result of the MR measurement process and subsequent demodulation is an array storing complex $k$-space values\n(also known as the spatial frequencies of the MR images).\nTo reconstruct the MR images, the inverse Fourier transform is used to transform the signal from the frequency domain to the\nspatial/image domain. In other words, $k$-space is an intermediate step between MR scan and reconstructed image.\n(Not mandatory: If you are interested in the Fourier transform, we encourage you to look at this page: Discrete Fourier Transform.)\n2.2 The ComplexImage Class\nIn order to deal with 2D arrays of complex numbers in this project, we need to implement a new class, ComplexImage,\nin ComplexImage.java. \nAccordingly, we will store the real part and the imaginary part of our complex image each in an Image object.\npackage project;\n\nimport mt.Image;\n\npublic class ComplexImage {\n protected mt.Image real; //Image object to store real part\n protected mt.Image imag; //Image object to store imaginary part\n protected String name; //Name of the image\n protected int width; \n protected int height;\n}\n\nCreate constructors and getters. Remember: the class objects, real, imag, and name, and the class variables, width and height, must be set in the constructor. 
As in the Image class, there will be two types of constructors in the ComplexImage class:\npublic ComplexImage(int width, int height, String name)\npublic ComplexImage(int width, int height, String name, float[] bufferReal, float[] bufferImag)\n\npublic int getWidth()\npublic int getHeight()\npublic String getName()\n\nFor the project, $k$-space data is provided in the widely used HDF5 data format. You can read $k$-space data using the LoadKSpace()\nmethod in the given class ProjectHelpers.java:\nComplexImage kSpace = ProjectHelpers.LoadKSpace("kdata.h5");\n\n2.3 Images of Magnitude and Phase of Complex Arrays\nWe learned in Section 2.1 that both MR images and their spatial frequencies, i.e., $k$-space, are complex arrays, and are made up of and usually stored as real and imaginary parts. We also learned that complex numbers can be characterized by their magnitude and phase.\nOne very important aspect of MRI images is the following: while both real and imaginary parts, or both magnitude and phase,\nare needed to compute the image, the diagnostic information for the radiologist is mostly just visible in the magnitude image.\nTherefore, let's implement methods to compute magnitude and phase images from the real and imaginary parts.\nImplement two methods, calculateMagnitude() and calculatePhase(), as methods of the ComplexImage class\nfor calculating magnitude and phase, respectively. Refer to the related equations above in subsection 2.1.1.\nprivate Image calculateMagnitude(boolean logFlag)\nprivate Image calculatePhase()\n\nWe are implementing a logFlag in the calculateMagnitude() method because we'd like to be able to choose between linear and logarithmic\n(log10) output scales. The magnitude of $k$-space has a huge image intensity range between the center (low-frequency part, very large)\nand the periphery (high-frequency part, very small). 
Taking a logarithm (log10) of the magnitude of $k$-space (point-wise) can reduce the huge image\nintensity range for better visualization of the magnitude of $k$-space.\nFor access to the magnitude and phase images, we use getters:\npublic float[] getMagnitude()\npublic float[] getLogMagnitude()\npublic float[] getPhase()\n\nFinally, you can show the magnitude and phase images using the given method DisplayUtils.showImage(). Please use this method in Project.java.\n\nFigure 2.3. Magnitude images of k-space without (left) and with (right) logarithmic scale. The left figure shows only one small dot in the middle due to a huge image intensity range, while the right figure displays the whole intensity range on a scale that is visible to the human eye.\n\nFigure 2.4. A phase image of k-space.\n\nIn the project report, you should\n\nShow the real and imaginary parts of the $k$-space\nShow the magnitude and phase images of the $k$-space\nExplain what those mean. Can you elaborate on why the phase does not show the same intensity variation as the magnitude?\n\nNext task: Image Reconstruction\n" } { "title": "Project Work 1 - Introduction", "url": "https://mt2-erlangen.github.io/introduction/", "body": "Overview\n\nIntroduction\nk-Space\nImage Reconstruction\nFilters\nOutlook and Conclusion\n\n0. Disclaimer\nAll the illustrations are done using an MRI scan of a brain; however, you are required to reimplement the project using a scan of a knee:\n\nThe data is part of the Java template kdata_knee.h5. Please replace the figures in the report with results from your own implementation.\nFor general information and best practices have a look at our project report guidelines.\nWe also provide you with a basic Java template including some useful helper functions, similar to what you saw during the exercises. You have to use this template as the starting point of your project. 
For more information on the installation have a look at our getting started guide.\nFurthermore, we provide a LaTeX template that you should use. It gives a more detailed structure for the report. Don't change the order, and replace all images with images generated from your own implementation.\nIn case you are working on CIP machines you may run into quota issues due to large packages loaded by Gradle. You can fix these issues with our guide.\nPlease also note that you can connect remotely to CIP machines using a remote SSH connection.\n\n⚠ Warning: Exact Naming Is Required\nAll class names, method names, field names, and parameter names must match the specifications on this website exactly — including capitalization and spelling. Any deviation will result in point deductions.\nTo avoid typos, copy names directly from the website rather than typing them by hand.\nAdditionally, do not modify any code we provide. This includes the provided class skeletons, import statements, and class names. Only add your own implementation inside the designated areas.\n\n1. Introduction\nIn this semester's project work, you will learn some basic concepts of magnetic resonance imaging (MRI). The MRI scanner acquires data in the spatial frequency domain, known as k-space. MR image reconstruction requires the (inverse) Fourier transform of the acquired k-space data.\nYour first task is to write an introduction, which should include:\n\nWhat is MRI? Motivation and purpose of MRI.\nImage acquisition: Why is a strong magnetic field needed? What does the word "resonance" in MRI mean? Why is an antenna (receiver coil) needed? From a signal processing point of view, what is the relationship between the data acquired from an MRI machine and MR images?\nWhat are the advantages and disadvantages of MRI compared to other imaging modalities, like computed tomography (CT)? E.g., does MRI require ionizing radiation? Does MRI provide better soft-tissue contrast? 
Is the acquisition speed of MRI as fast as CT? If not, why?\nGive a brief overview of the contents of the following tasks.\n\nUse references when necessary. Try to cite scientific publications (e.g. journal papers) in your introduction.\nYour introduction and conclusion should not contain any images.\nNext task: k-Space\n" } { "title": "Project Work 0 - Project Report", "url": "https://mt2-erlangen.github.io/checklist/", "body": "General Information\nThe deadline for the project report is on August 1st 2026.\nThe project report, as well as the coding, are individual work. As such, you need to submit them individually.\nNote: we'll check for plagiarism.\nDisclaimer\nThis is a short introduction with general information and best practices, as well as guidelines for your project-report. We highly suggest you read through the entirety of this Introduction in order to get a detailed impression of the project-work ahead of you. We have also prepared a video with all the necessary information about the report. \nYou have been provided with a basic java template containing a fully set up project, as well as the empty task-classes you will be working with.\nFor information on the initial Setup, have a look at our getting started guide.\nFurthermore, you have been provided a latex-template as a starting point for your report. \nIt already defines the structure your report should have, so please do not change the order of the report-sections. \nIn cases where the images to expect after completing a task were given in the exercise, you should obviously only use images you generated with your own code in your report\nIf you are working on CIP machines, you may run into quota issues. You can fix these issues with this short guide.\nPlease also note, that you can connect remotely to CIP machines using a remote SSH connection\nReport Guidelines\nThe project report can be written in either English or German. 
Please write between 4 and 7 pages of text, not counting the images.\nWe expect you to:\n\n\nUse the LaTeX template we provide\n\nLaTeX template link\nDo not modify the style or the formatting. No ornaments for page numbers!\nThe template defines the overall structure of your project. You have to fill in all the gaps.\nDo not change the order of the sections in our template.\nDo not change the titles of sections or subsections.\nDo not change the order of the figures in the project. You can optionally add new figures to the report.\nWe will only count answers that appear in the correct subsection of the report. If you want to avoid repeating yourself, use \\label{} and \\ref{}.\nThe template contains examples for all commands necessary for the report. It is allowed to import and use other packages if desired. \n\n\n\nUse scientific references in your explanations to clearly separate your work from the work of others:\n\nUse the bibliography (see template Bib/literatur.bib) and keep the citation style provided in the template.\nThe bibliography must be sorted (either alphabetically when using the Name/Year citation style or\nby the order you use them in the text when numbering the sources).\nDo not use more than two references that are websites only. \n\n\n\nAll symbols in equations need to be explained in the text.\n\n\nAll equations, figures and tables, if applicable, have to be numbered and referenced in the text.\n\nAll your figures should look professional.\nThey should not be blurry or hand-drawn and images should not have a window border of ImageJ.\nThey should not overlap with the text.\nAll figures need to have captions giving a brief description what the figure shows. The caption should be below the figure.\nLabel all axes in all plots and coordinate systems!\nA list of figures is not needed.\nReplace the images in the report template with images from your own implementation applied to knee data.\n\n\n\nDo not use abbreviations without introducing them. 
E.g., the first time you should write "Magnetic Resonance Imaging (MRI)".\nAfter that, "MRI" is enough.\n\n\nJust like in storytelling, connect the context of the project report, so everyone can see the flow.\n\n\nDo not use footnotes!\n\n\nCheck your spelling: there shouldn't be any obvious spelling errors that can be detected by a spell checker.\n\n\nTo obtain all the points for the content of your report, in addition to the above\n\nCheck whether you have addressed all the questions in the task description.\nCheck whether you have provided all the result figures and a detailed explanation of them.\n\nGuidelines for the Use of Writing Assistants\nWe welcome students to use writing assistants to enhance the quality of the written report. However, we would like to point out that\nstudents are responsible for the correctness of the content, and that scientific references are mandatory to verify all the claims made in the report.\nIf you decide to use any writing assistant, we ask you to add the tool to the list of references.\nThe use of spell-checking and translation software is encouraged and can be done without adding them to the list of references.\nTo the Introduction\n" }, { "title": "Project Work 6 – Iterative Reconstruction and Conclusion", "url": "https://mt2-erlangen.github.io/archive/2020/reconstruction/", "body": "Iterative Reconstruction\nUsing backprojection, we could achieve a blurry reconstruction result.\nThe Filtered Backprojection algorithm solves this problem by applying a filtering step before backprojection.\nFor the project work, we will take a different approach.\nIn the last section, you measured the error between your reconstruction and the ground truth volume.\nHowever, this is only possible when doing a simulation and not when reconstructing an unknown real object.\nWhat we can do instead is measuring the error in the projection domain by simply projecting the reconstruction!\nImplement the following method to use with our 
reconstructionProjector:\n // In mt/Projector.java\n public void reconstructIteratively(Image measuredProjection, int sliceIdx, int numIterations)\n\nIt should\n\ncall projectSlice on volume to obtain a projection of our reconstruction\ncalculate an error image by subtracting sinogram.getSlice(sliceIdx) from measuredProjection\nreplace the current slice of sinogram with our error image\ncall backprojectSlice with the current sliceIdx\nrepeat all this for numIterations iterations\n\n\nSo we're now doing a reconstruction of the error sinogram and adding it to our blurry image.\nDoes this reduce our error?\nOur reconstruction algorithm is now finished. But it operates only on 2-d slices.\nCreate 3-d versions of projectSlice, backprojectSlice and reconstructIteratively:\n public void project()\n public void backproject()\n public void reconstructIteratively(Volume measuredProjections, int numIterations)\n\nAll they should do is call their 2-d versions for each slice.\nYou should now be able to reconstruct volumes.\nHint: You can use the following construct instead of a for-loop to enable multi-threaded calculation.\n // You have to replace `var` by `java.util.concurrent.atomic.AtomicInteger` when using Java 1.8\n var progress = new java.util.concurrent.atomic.AtomicInteger(0);\n IntStream.range(0, sinogram.depth()).parallel().forEach(z -> {\n System.out.println("Progress: " + (int) (progress.incrementAndGet() * 100.0 / (double) sinogram.depth()) + " %");\n //Do stuff here for slice z\n ...\n });\n\nProject Report\nFor the project, describe how your iterative reconstruction algorithm works. You should not mention implementation details\nlike variable or function names. Compare it with the Filtered Backprojection algorithm! It's not necessary to explain the\nFiltered Backprojection algorithm in detail. 
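The five steps above can be illustrated on a deliberately tiny toy problem. All names here are hypothetical: a single "projection" that sums two pixels stands in for projectSlice, and the backprojection simply smears the error evenly over the ray; the real method works on Volume slices.

```java
// Toy illustration of iterative reconstruction in the projection domain:
// estimate <- estimate + backproject(measured - project(estimate))
public class IterativeSketch {
    // "Forward projection": one ray summing both pixels of a 2-pixel image.
    static double project(double[] image) {
        return image[0] + image[1];
    }

    // "Backprojection": smear the projection-domain error evenly over the ray.
    static void backprojectAdd(double[] image, double error) {
        image[0] += error / 2.0;  // divide by the ray length (2 pixels)
        image[1] += error / 2.0;
    }

    static double[] reconstructIteratively(double measured, int numIterations) {
        double[] estimate = new double[2];  // start from an all-zero image
        for (int i = 0; i < numIterations; i++) {
            double error = measured - project(estimate);  // error in projection domain
            backprojectAdd(estimate, error);              // add its backprojection
        }
        return estimate;
    }
}
```

Already after one iteration the re-projected estimate matches the measurement exactly in this toy case; with many rays and angles, the error instead shrinks gradually over the iterations.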
Just highlight the main difference.\nTest your reconstruction algorithm on a slice of a CT reconstruction from the Cancer Imaging Archive.\nMeasure the error of the reconstructed slices after each iteration (so call reconstructIteratively with numIterations == 1).\nInclude a figure showing this error as a function of the iteration number in the project report.\nInclude images comparing ground truth, the backprojected slice and the result after a few iterations.\nComment on the error and the images in your text.\nDoes the result of the iterative reconstruction look better than solely using backprojection?\nThis part of the project report should be no longer than 1.5 pages.\nConclusion\nIn the last part, summarize what you have implemented and explained in your project report.\nReview the shortcomings of your simplified approach and how they could be mitigated in the future.\nDraw a conclusion on your work!\nThis part of the project work should be about a quarter page long and should contain no images.\nSubmission\nSubmit your project report as a PDF and your entire project folder of your code by August 16, 23:55h.\nYour project must compile as a whole!\nMake sure that you have had a last look at our checklist.\nEvaluation\nWe hope you had a fun project work!\nYou can help us to improve the instructions for next year!\nPrevious section\n" }, { "title": "Project Work 5 – Backprojection", "url": "https://mt2-erlangen.github.io/archive/2020/backprojection/", "body": "Backprojection\nIf we have a look at the sinogram values corresponding to one detector position we get some information about the projected object.\nFor instance, we can see the profile of the projected circle in the following image.\n\nHowever, if we have no access to the original volume slice we cannot tell anything about the distance of the object to the detector.\nAll the following situations would generate the same projection!\n\nSo apparently, we get some information in the direction of the detector plane, 
but all information orthogonal to the detector plane\nis lost.\nSo one thing that we can do if we want to perform a reconstruction from the sinogram is to take the information in the direction of the detector plane\nand uniformly smear it into the direction orthogonal to the detector plane in a range where we assume the object is located.\nWe call this process backprojection.\n\n\n \n\n\n The backprojection smears the value of the projection uniformly over the paths of the rays\n\n\nUse the following method, which calculates the value that we want to smear back.\n // in mt.Projector\n public float backprojectRay(mt.Image sinogramSlice, int angleIdx, float s) {\n sinogramSlice.setOrigin(0.f, -sinogram.physicalHeight * 0.5f);\n return sinogramSlice.interpolatedAt(angleIdx * sinogram.spacing, s) // * sinogram.spacing is necessary because spacing is not valid for our angle indices (actually each coordinate should have their own spacing. That's the revenge for us being lazy.).\n / (volume.physicalWidth() * Math.sqrt(2)) // we guess that this is the size of our object, diagonal of our slice\n / sinogramSlice.width() // we will backproject for each angle. 
We can take the mean of all angle positions that we have here.\n ;\n }\n\nUse this method in backprojectSlice to backproject for each pixel x, y a horizontal line of the sinogram (all possible angles).\n // in mt.Projector\n public void backprojectSlice(int sliceIdx)\n // A helper method\n public void backprojectSlice(int sliceIdx, int angleIdx)\n\nTo do this:\n\nCreate a loop over all angleIdx\n\nCall the helper method for all angle indices (there are sinogram.width angles)\n\n\nIn public void backprojectSlice(int sliceIdx, int angleIdx)\n\nGet the slice with index sliceIdx\nLoop over all x, y of this image\nCalculate the physical coordinates from the integers x and y (times spacing plus origin!)\nCalculate the actual angle theta from the angleIdx\nCalculate s from the physical coordinate.\n\ns is the physical distance of the point $\vec{x}$ from the ray through the origin at angle theta.\nCan you write down the line equation for this line?\nCan you use the line equation to calculate the distance between $\vec{x}$ and the line through the origin?\n\n\nCall backprojectRay with angleIdx and s\nAdd this result of backprojectRay to the current value at position x, y and save the sum at that position\n\n\nReconstruction\nNext, we want to try out whether we can use our backprojection to reconstruct a volume.\nWhenever we want to test whether a method works, we need something to compare it with.\nThe best possible result, the "true" values, is usually called ground truth.\nWe can use one of the reconstructions that we downloaded from the Cancer Imaging Archive as a ground truth volume.\nThe best possible result for our reconstruction is to come as close as possible to the original (ground truth) volume.\nCreate a file src/main/java/project/GroundTruthReconstruction.java.\n// Your name <your idm>\npackage project;\n\nimport mt.Projector;\nimport mt.Volume;\n\nclass GroundTruthReconstruction {\n\n public static void main(String[] args) {\n (new 
ij.ImageJ()).exitWhenQuitting(true);\n\n }\n}\n\nIt's important that we never mix up the ground truth with the results of our algorithm.\nTherefore, create an instance of Projector whose task is to simulate projections.\nYou can call it groundTruthProjector.\nOpen a test volume and create an empty (all pixels 0) sinogram. They are needed to call the constructor of Projector.\nCall groundTruthProjector.projectSlice with an arbitrary slice index.\n\nCreate an empty volume (all pixels 0) with the same dimensions as the ground truth volume and a copy of groundTruthProjector.sinogram().\nYou can add the following method to mt.Volume to create copies.\n // in mt/Volume.java\n public Volume clone(String name) {\n Volume result = new Volume(width(), height(), depth(), name);\n IntStream.range(0, depth()).forEach(z-> result.getSlice(z).setBuffer(Arrays.copyOf(slices[z].buffer(), slices[z].buffer().length)));\n return result;\n }\n\nCreate a new projector reconstructionProjector with the empty volume and the copy of our sinogram.\nUse backprojectSlice(...) to create your first reconstruction of a slice.\nA good way to test your implementation is to incrementally apply more and more backprojections on your reconstruction.\nWhen you calculated the sinogram for SLICE_IDX you can use\n// in project.GroundTruthReconstruction.java\n\n// Choose the slice in the middle. Hopefully showing something interesting.\nfinal int SLICE_IDX = ????; // < Use an index for which you already calculated `projectSlice`\n\nfor (int i = 0; i < projector.numAngles(); i++ ) {\n try {\n TimeUnit.MILLISECONDS.sleep(500);\n } catch (InterruptedException e) {\n e.printStackTrace();\n }\n projector.backprojectSlice(SLICE_IDX, i);\n projector.volume().getSlice(SLICE_IDX).show();\n\n //// Optionally save the intermediate results to a file:\n //DisplayUtils.saveImage(projector.volume().getSlice(SLICE_IDX), "/media/dos/shepp_9_"+i+".png");\n}\n\nThis will wait 500ms between each backprojection. 
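While debugging, it can also help to check the computation of s in isolation. A stand-alone sketch, assuming the convention that s is the projection of the point (x, y) onto the unit normal (-sin theta, cos theta) of the ray direction; the sign convention is an assumption and must match the one used when the sinogram was generated:

```java
// Signed distance s of a physical point (x, y) from the ray through the
// origin with direction (cos theta, sin theta).
public class RayDistanceSketch {
    static float signedDistance(float x, float y, float theta) {
        // dot product of (x, y) with the ray's unit normal (-sin theta, cos theta)
        return (float) (-x * Math.sin(theta) + y * Math.cos(theta));
    }
}
```

Points on the ray itself yield s = 0, which is an easy property to verify for a few hand-picked points and angles.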
Do your rays meet at the right points? Use a simple test image with\nonly a single white circle if not. This should help you debug the issue.\n\nBackprojection using 9 views\nBackprojection using 100 views\n\nProject Report\nFor the project report, you should briefly describe your backprojection reconstruction algorithm.\n\nDescribe your implementation, create at least one figure supporting your explanations.\nYou should never mention implementation details like for-loops or variable names, but important parameters like the number\nof projection angles you used.\nTest your reconstruction algorithm\n\nusing a simple test image like a white circle or square\nusing a CT reconstruction that you downloaded. Cite the data source!\n\n\nWhat do the images look like? If they are blurry, what is the reason for that?\nShow the images in your project report.\nMention in one sentence how the Filtered Backprojection algorithm tries to solve that problem.\nHow big are your errors in comparison to the ground truth? If you are using a measure like the Mean Squared Error give\na formula defining it.\n\nThe content for this section should be about one page long. \nPrevious section\nNext section\n" }, { "title": "Project Work 4 – Sinogram", "url": "https://mt2-erlangen.github.io/archive/2020/sinogram/", "body": "Sinogram\n\nNow, you should be able to generate sinograms from volume slices.\nGenerate two sinograms from two volume slices:\n\n\nOne sinogram from a simple test image. You can use, for instance, a white circle as I did in the last section.\n\n\nOne sinogram from a real CT reconstruction. You should cite the source of that image. The Cancer Imaging Archive even\nexplains how to do that.\n\n\nShow both the volume slices and the sinograms.\nExplain to the reader what they are seeing. What is the Radon transform?\nCan the Radon transform be inverted?\n\n\nDo the sinograms contain some kind of symmetry? 
What is the reason for that?\nDo we really need a 360° scan?\n\nThis section should not be longer than one page.\nPrevious section\nNext section\n" }, { "title": "Project Work 3 – Projection", "url": "https://mt2-erlangen.github.io/archive/2020/projection/", "body": "Projections\nTo understand how we can reconstruct a volume from X-ray images, we will first go through the process of how these X-ray images\nwere acquired from a physical volume.\nIn your project report you should...\n\nexplain to the reader the physical process of X-ray attenuation and its material dependence.\nWhat materials in the human body attenuate more X-rays than others?\nHow is this represented in a CT reconstruction? Or in other words: what quantity does a CT reconstruction actually show?\nWhich kinds of tissue therefore appear lighter and which darker?\nexplain the fundamental theorem that describes this process (X-ray attenuation). Give a formula!\nExplain all the symbols that you use in the formula.\nsupport your explanations with references, and also provide the source of the formula.\n\nIn this project work, we will make some simplifying assumptions on the acquisition geometry.\nI made a drawing of the path of a single X-ray through a slice of our volume.\nSince this ray crosses the origin of our coordinate system we call it the principal ray.\n\nWhat are the coordinates $\vec{x}_{P}$ of a point $P$ on the line of the principal ray as a function of the angle $\theta$ ($\alpha$ in the drawing) and the distance\nfrom the origin $r$?\nIn reality, not all X-rays cross the coordinate origin. 
\nWhat are the coordinates $\vec{x}_{P'}$ of a point $P'$ that is on a ray that hits the detector at coordinate $s$, as a function of $r$ and $\theta$?\nWe assume parallel rays.\nHint: What vector do you have to add to $P$ to get to $P'$?\n\nUnfortunately, the figure was drawn on paper and you shouldn't use hand-drawn figures in the project report (as you can see they look ugly).\nPlease create one or two plots on the computer that explain your derived ray equations to the reader of the project\nreport. Decide which information is important for the reader to understand your text.\n\nHow does the described situation differ from the actual acquisition geometry of modern CT scanners?\nWhat are the reasons for that? Could our simplified situation be implemented in reality?\n\n\n\nAfter Implementation: Describe briefly your implementation of the projection.\nDo not refer to any Java classes or variable names!\nGive a formula for how you calculated the different projection angles.\nGive a formula for how you calculated the projection result for each ray.\nWhat physical effects were neglected in our simulation but are present in reality?\nName at least three non-idealities of real systems.\n\nThis part of the project work should be no longer than 1.5 pages.\nAfter some remarks from you: 2 pages are also ok.\nImplementation\nWe already have a Volume class which can store the stack of image slices. Additionally, we also want\nto store the projection images (referred to as sinograms) for this stack of image slices. 
For this,\ncreate a class mt.Projector in a file src/main/java/mt/Projector.java, which can hold both volume slices\nand the sinograms.\n// Your name here <your idm>\npackage mt;\n\nimport java.util.stream.IntStream;\n\npublic class Projector {\n // Our volume\n private mt.Volume volume;\n // Our sinogram\n private mt.Volume sinogram;\n\n}\n\nImplement a constructor for this class.\nIt should call this.volume.centerOrigin() and set the origin of each sinogram slice to 0.0f, -sinogram.physicalHeight() * 0.5f so we use the same coordinate\nsystems as in our drawings (it might be handy to set the origin of sinogram to 0.0f, -sinogram.physicalHeight() * 0.5f, -sinogram.physicalDepth() * 0.5f, requires a Volume.setOrigin method)\n public Projector(mt.Volume projectionVolume, mt.Volume sinogram) {\n ... // Implementation here\n assert sinogram.depth() == volume.depth() : "Should have same amount of slices";\n }\n\nConstructor and Setters/Getters:\n public void setSinogram(Volume sinogram)\n public Volume sinogram()\n\n public void setVolume(Volume volume)\n public Volume volume()\n\n public int numAngles() // == sinogram.width()\n\nWe assume that we acquire $N$ projections at $N$ different angles $\theta$.\nAll angles should have the same distance from each other and divide $2\cdot \pi$ into $N$ equal parts (we always use radians for angles).\nImplement a method which computes the angle value of the $n^{th}$ angle index. We want to use the method such that $n=0$ returns $\theta=0$, $n=1$ returns $\theta= \frac{2\cdot \pi}{N}$, and so on. 
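A stand-alone sketch of this angle formula, together with one possible discretization of the ray sum and the index-to-coordinate mapping that appear in the following steps. The parallel-beam parametrization and the nearest-neighbour interpolate() helper are assumptions made for self-containment; the real code uses volumeSlice.interpolatedAt and physical coordinates with a proper origin.

```java
// Sketches of the Projector building blocks on raw arrays.
public class ProjectRaySketch {
    // n-th of numAngles equal steps dividing 2*pi (radians).
    static float getNthAngle(int angleIdx, int numAngles) {
        return (float) (2.0 * Math.PI * angleIdx / numAngles);
    }

    // Detector index -> physical coordinate s, origin shifted to the center.
    static float sFromIndex(int sIndex, float spacing, float physicalHeight) {
        return sIndex * spacing - physicalHeight * 0.5f;
    }

    // Discretized line integral: sum of mu along the ray with shift s and
    // angle theta, stepping r from -R to R. Point on the ray:
    // x = r * (cos t, sin t) + s * (-sin t, cos t)  (parallel-beam assumption).
    static float projectRay(float[][] slice, float spacing, float s, float theta, float R) {
        double sum = 0;
        for (float r = -R; r <= R; r += spacing) {
            double x = r * Math.cos(theta) - s * Math.sin(theta);
            double y = r * Math.sin(theta) + s * Math.cos(theta);
            sum += interpolate(slice, x, y);
        }
        return (float) (sum * spacing);
    }

    // Crude nearest-neighbour lookup; array indices are used directly as
    // coordinates here for simplicity (no origin handling).
    static float interpolate(float[][] slice, double x, double y) {
        int ix = (int) Math.round(x), iy = (int) Math.round(y);
        if (iy < 0 || iy >= slice.length || ix < 0 || ix >= slice[0].length) return 0f;
        return slice[iy][ix];
    }
}
```

Only the structure matters here: one angle per equal step of 2π/N, one sum per (s, θ) pair, each sum weighted by the step size `spacing`.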
Think of a general formula to compute the $n^{th}$ angle and describe it briefly in the description of your implementation.\nUse this formula to implement the following method:\n // In mt.Projector\n public float getNthAngle(int angleIdx)\n\nNow, recall the formula you derived for the position of point $P'$ in the previous section.\nWe could directly use those coordinates $\vec{x}$ to calculate the integral in Lambert-Beer's law for a ray with angle $\theta$ and shift $s$ over a slice $\mu$ on our computers:\n$$ I_{\textrm{mono}} = I_{0} \cdot \exp\left(-\intop\mu\left(\vec{x}\right)\textrm{d}\vec{x}\right) = I_{0} \cdot \exp\left(-\intop_{-R}^{R}\mu\left(r,\theta, s\right)\textrm{d}r\right)$$\n$R$ is the radius of the circle circumscribing our rectangular slice. You can see it in the drawing.\nThe path integral goes along the path marked in yellow in the drawings.\nWe are only interested in the value of the line integral\n$$ P(\theta, s) = \intop_{-R}^{R}\mu\left(r, \theta, s\right)\textrm{d}r $$\nand we have to replace the integral by a sum (computers cannot calculate integrals directly)\n$$ P(\theta, s) = \sum_{r=-R}^{R}\mu\left(r,\theta, s\right) \cdot \mathtt{spacing}$$\nCalculate this sum for a fixed $s$ and $\theta$ on a slice of our volume!\nYou can use volumeSlice.interpolatedAt(x,y) to determine $\mu(\vec{x})$ and access values of our slice.\n // in mt.Projector\n public float projectRay(mt.Image volumeSlice, float s, float theta)\n\nWe have now calculated one value of one of the gray rays on our slice which translates to one point in our sinogram.\n\nNext, we want to call this function for every ray and every pixel of our sinogram in the following method:\n // in mt.Projector\n public void projectSlice(int sliceIdx) {\n\nTo do that ...\n\n\nGet the slice sliceIdx from this.volume using getSlice\n\nThis is a slice of our volume with coordinates $x$ and $y$.\n$x$ runs from left to right\n$y$ runs from top to 
bottom\n\n\n\nGet the sinogram for that slice sliceIdx from this.sinogram using getSlice\n\nThis is a slice of our sinogram with physical coordinates $s$ and $\theta$.\n$\theta$ runs from left to right\n$s$ runs from top to bottom\n\n\n\nIterate over each pixel of the sinogram. I would use angleIdx, sIndex as loop variables.\n\nCalculate the actual value of s from sIndex.\nCalculate theta from angleIdx by calling the function getNthAngle\nCall projectRay with s and theta\nSave the result to sinogram at positions angleIdx and sIndex\n\n\n\nHint: Computing s from sIndex is just using the physical coordinates and shifting the origin of the $s$ axis in\nthe sinogram to the center.\nThis can be done by multiplying sIndex with sinogram.spacing() (pixel size of the detector) and adding\nsinogram.origin()[1] (== -sinogram.physicalHeight() * 0.5f).\nWe recommend testing your algorithm using a simple image.\nChoose a good size for the sinogram to capture the whole image (e.g. height == volume.height).\nFor simplicity, you do not need to change the spacing of the volume or the sinogram.\n\nSimple test slice\nSinogram of that slice\n\nI used a high number of 500 angles to get a nearly square image.\nWhen you are using fewer angles the width of your sinogram will be smaller.\nUse fewer angles to compute the results faster.\nYou may also apply projectSlice on all slices and display the sinogram.\nCtrl+Shift+H should reveal a rotating torso when using one of the Cancer Archive scans:\n\nOr of the test image above\n\nPrevious section\nNext section\n" }, { "title": "Project Work 2 – Volumes", "url": "https://mt2-erlangen.github.io/archive/2020/volume/", "body": "Getting started\nImportant: You have to work alone on your project work. 
No team partners allowed anymore 😔!\nCT reconstruction treats the problem of recovering a three-dimensional volume from a set of X-ray images.\nSo we will need two classes that represent our volume and our stack of X-ray projections.\nIt turns out that we can interpret our projections and our volume just as a list of 2-d images.\n\n\n\n\n\nA volume: very much just multiple images stacked one over another\n\n\nCreate a class mt.Volume\n// Your name <your idm>\n// No team partner... So sad 😢!\n\npackage mt;\n\nimport java.util.Arrays;\nimport java.util.stream.IntStream;\n\npublic class Volume {\n // Here we store our images\n protected mt.Image[] slices;\n\n // Dimensions of our volume\n protected int width, height, depth;\n\n // Spacing and origin like for mt.Image\n protected float spacing = 1.f; // spacing is now our voxel size\n protected float[] origin = new float[]{0, 0, 0}; // position of the top-left-bottom corner\n\n // A name for the volume\n protected String name;\n\n}\n\nCreate a constructor. 
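The constructor and the size bookkeeping described next can be sketched stand-alone. This is an illustration rather than the required mt.Volume: the mt.Image slices are omitted and the class name VolumeSketch is hypothetical.

```java
// Stand-alone sketch of the Volume constructor and size getters.
public class VolumeSketch {
    protected int width, height, depth;
    protected float spacing = 1.f;                    // voxel size
    protected float[] origin = new float[]{0, 0, 0};  // position of the corner
    protected String name;

    public VolumeSketch(int width, int height, int depth, String name) {
        this.width = width;
        this.height = height;
        this.depth = depth;
        this.name = name;
    }

    public float physicalWidth()  { return width * spacing; }
    public float physicalHeight() { return height * spacing; }
    public float physicalDepth()  { return depth * spacing; }
    public float[] origin()       { return origin; }

    // Shift the origin so that (0, 0, 0) lies in the volume center.
    public void centerOrigin() {
        origin = new float[]{-0.5f * physicalWidth(),
                             -0.5f * physicalHeight(),
                             -0.5f * physicalDepth()};
    }
}
```

The real class additionally allocates `depth` images of size width × height and forwards spacing/origin changes to each slice.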
Remember: width, height, depth, name must be set and slices must be created as an array.\nWe need depth images of size width $\times$ height for the slices.\n public Volume(int width, int height, int depth, String name)\n\nGetters/setters...\n public int width()\n public int height()\n public int depth()\n public float physicalWidth() // width * spacing()\n public float physicalHeight() // height * spacing()\n public float physicalDepth() // depth * spacing()\n\n public mt.Image getSlice(int z) \n public void setSlice(int z, mt.Image slice)\n\n public float spacing()\n public void setSpacing(float spacing) // should also set the spacing for all slices!\n public String name()\n public float[] origin()\n\n // should set origin to (-0.5 physicalWidth, -0.5 physicalHeight, -0.5 physicalDepth) and call centerOrigin on each slice\n public void centerOrigin()\n\nNow comes the interesting part: visualize the volume!\nYou will need to update the src/main/java/lme/DisplayUtils.java file and use the following command to visualize the volume.\n public void show() {\n lme.DisplayUtils.showVolume(this);\n }\n\nYou can download a volume from the Cancer Imaging Archive.\nUse one of the following links (it does not matter which CT volume you use).\n\nVolume 1\nVolume 2\nVolume 3\n\nUnzip the folder and drag it onto a running ImageJ, e.g. by the following code snippet in a file src/main/java/project/Playground.java.\n(if you have problems unzipping the files you might try the official downloader from the website. You need their downloader to open the *.tcia files).\n// This file is only for you to experiment. 
We will not correct it.\n\npackage project;\n\nimport mt.Volume;\n\nclass Playground {\n\n public static void main(String[] args) {\n // Starts ImageJ\n (new ij.ImageJ()).exitWhenQuitting(true);\n\n // You can now use drag & drop to convert the downloaded folder into a *.tif file\n \n }\n\n}\n\n\nSave the opened DICOM as a *.tif file (File > Save As > Tiff...).\nThere are more, smaller test volumes on studOn.\n\nOpen the saved TIFF file in the main of a file src/main/java/project/Playground.java:\n// This file is only for you to experiment. We will not correct it.\n\npackage project;\n\nimport mt.Volume;\n\nclass Playground {\n\n public static void main(String[] args) {\n (new ij.ImageJ()).exitWhenQuitting(true);\n \n Volume groundTruth = DisplayUtils.openVolume("path/to/file.tif");\n groundTruth.show();\n \n }\n\n}\n\n\nYou can now scroll through the different slices.\nHere is a short summary of handy functions of ImageJ when working with CT images.\n\nCtrl+Shift+C: Brightness and Contrast\nCtrl+Shift+H: Orthogonal Views (view volume from three sides)\nAfter selecting a line: Ctrl+K Line Plot\nCtrl+I: Get patient information of a DICOM\nLook at a 3-d rendering with ClearVolume\n\nPrevious: Introduction \nNext: Forward Projection\n" }, { "title": "Project Work 1 – Introduction", "url": "https://mt2-erlangen.github.io/archive/2020/introduction/", "body": "Contents\n\nIntroduction (in-class session June 9)\nVolumes\nProjection (in-class session June 16)\nSinogram\nBackprojection and Reconstruction (in-class session June 23)\nIterative Reconstruction and Conclusion\n\nIntroduction\nDuring this semester we will learn how computed tomography (CT) reconstruction algorithms work.\nYour first task is to find out more about CT and write an introduction for your project report.\n\nFind an informative title for your project report. "Project Report" and "Introduction" are not good titles.\nWhat is computed tomography?\nWhat is the problem it tries to solve? 
When and how was it first introduced?\nWhat kind of electromagnetic radiation is used to acquire the images?\nHow did modern CT devices improve over their predecessors? What is the typical spatial resolution of a state-of-the-art CT scanner?\nWhat are the advantages and disadvantages of CT in comparison with other modalities? Include at least two advantages and\ntwo disadvantages.\nGive a short overview of the contents of the following sections of your project report.\nProve all your statements with references. You should use at least four distinct sources in your introduction that are\nnot webpages.\n\nThe introduction should be no longer than one page but at least half a page. \nYour introduction and conclusion should not contain any images.\nPlease have a look at our checklist for a good project report.\n\n\nNext task\n" }, { "title": "Exercise 6", "url": "https://mt2-erlangen.github.io/archive/2020/exercise-6/", "body": "Submission deadline: 29.06.20 23:55h\nIn the last exercise, we want to have a look at edge detection and segmentation.\nEdge Detection\n 7 Points\nOpen a test image in a new file src/main/java/exercises/Exercise06.java.\n// Your name\n// Team partner name\npackage exercises;\n\nimport lme.DisplayUtils;\nimport mt.LinearImageFilter;\n\npublic class Exercise06 {\n public static void main(String[] args) {\n\t(new ij.ImageJ()).exitWhenQuitting(true);\n\tmt.Image cells = lme.DisplayUtils.openImageFromInternet("https://upload.wikimedia.org/wikipedia/commons/8/86/Emphysema_H_and_E.jpg", ".jpg");\n\n }\n}\n\nWe will use the Sobel Filter to estimate the gradient of the image.\nThe Sobel Filter uses two filter kernels: one to estimate the x-component of the gradient and one for the y-component.\n\nCreate two LinearImageFilters with those coefficients. 
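The two kernels themselves appeared as images on the original page and are not reproduced in the text; the standard 3×3 Sobel coefficients and the gradient they estimate can be sketched standalone with plain arrays (the `Sobel` class below is an illustrative assumption, not part of the course code base — in the exercise you would load `KX`/`KY` into two LinearImageFilters instead):

```java
public class Sobel {
    // Standard 3x3 Sobel kernels (row-major): derivative along x and along y.
    // These are the coefficients to load into the two LinearImageFilters.
    static final float[] KX = { -1, 0, 1, -2, 0, 2, -1, 0, 1 };
    static final float[] KY = { -1, -2, -1, 0, 0, 0, 1, 2, 1 };

    // Gradient magnitude sqrt(dx^2 + dy^2) at pixel (x, y) of a row-major
    // image buffer; out-of-range neighbors read as 0, like atIndex does.
    static float magnitudeAt(float[] img, int w, int h, int x, int y) {
        float dx = 0.f, dy = 0.f;
        for (int j = -1; j <= 1; j++) {
            for (int i = -1; i <= 1; i++) {
                int xx = x + i, yy = y + j;
                float v = (xx < 0 || xx >= w || yy < 0 || yy >= h) ? 0.f : img[yy * w + xx];
                dx += v * KX[(j + 1) * 3 + (i + 1)];
                dy += v * KY[(j + 1) * 3 + (i + 1)];
            }
        }
        return (float) Math.sqrt(dx * dx + dy * dy);
    }
}
```

On a constant region the magnitude is zero; across a vertical step edge only the x-component responds — a quick sanity check for your filtered images.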
You can use filterX.setBuffer(new float[]{...})\nor setAtIndex to do that.\nFilter the original image with both of them!\n\nX component of gradient $\\delta_x$\nY component of gradient $\\delta_y$\n\nYou should now have two intermediate results that can be interpreted as the x-component $\\delta_x$\nand y-component $\\delta_y$ of the estimated gradient for each pixel.\nUse those two images to calculate the norm of the gradient for each pixel!\n$$ \\left|\\left| \\nabla I \\right|\\right| = \\left|\\left| \\left(\\delta_x,\\ \\delta_y \\right) \\right|\\right| = \\sqrt{ \\delta_x^2 + \\delta_y^2}$$\n\nFind a good threshold, set all gradient magnitude values below this value to zero and all others to 1.f to\nobtain an image with a clear segmentation into edge pixels and non-edge pixels.\n\nSegmentation\n 3 Points\n\n Source: https://commons.wikimedia.org/wiki/File:Emphysema_H_and_E.jpg (cc-by-2.0)\nFor histologic examinations, colored substances called stains are used to enhance the contrast\nof different portions of the tissue.\nUse a suitable threshold to segment the individual sites with high contrast (0 background, 1 contrasted cells).\nYou can use the following method to overlay your segmentation with the original image.\n // In lme.DisplayUtils\n public static void showSegmentedCells(mt.Image original, mt.Image segmented) \n // You may also try `showSegmentedCells(cells, segmentation, true);` with the newest version of DisplayUtils\n\n\nImproving your Segmentation\nThis is optional and not required for the exercise.\nYou might want to go directly to the evaluation of this year's exercises:\nhttps://forms.gle/2pbmuWtmeTtaVcKL7\nYou may notice that by just choosing a threshold you may not be able to separate each individual structure.\n\nYou can try out some operations from the menu Process > Binary while you have your 0/1 segmentation focused.\nYou have to convert to 8-bit first. 
E.g.\n\nImage > Type > 8-bit\nProcess > Binary > Watershed\n\n\nOr "click" on menu items in your program code.\n segmentation.show();\n IJ.run("8-bit");\n IJ.run("Watershed");\n DisplayUtils.showSegmentedCells(cells, segmentation);\n\n\nEvaluation\nWe redesigned the exercises from scratch for this semester.\nTherefore, some of the exercises might have been difficult to understand or too much work. \nWe are grateful for your feedback to help future semesters' students😊:\nhttps://forms.gle/2pbmuWtmeTtaVcKL7\n" }, { "title": "Exercise 5", "url": "https://mt2-erlangen.github.io/archive/2020/exercise-5/", "body": "Submission\nSubmission deadline: 08.06.20 23:55h\nPlease ensure that all files you created contain your name and IDM ID, as well as your partner's name and IDM ID if you're not working alone.\nEach exercise has 10 points. You have to achieve 30 of 60 points in six homework exercises to pass the module.\nQuantifying Errors\n3 Points\nIn Exercise03, we have seen that we can use linear low-pass filters, like the Gauss filter, to reduce \nthe amount of noise in images. 
Let's test that!\nAdd two static methods to the Image class:\npublic static float meanSquaredError(Image a, Image b);\npublic static float psnr(Image a, Image b, float maxValue); // maxValue is 255 for PNG images\n\n\n$$ \\mathrm{MSE}_{ab} = \\frac{1}{M} \\sum_{i=0}^{M-1} \\left(a_i - b_i\\right)^2 $$\n$$ \\mathrm{PSNR}_{ab} = 20\\cdot \\log_{10}(\\mathtt{maxPossibleValue}) - 10\\cdot \\log_{10}(\\mathrm{MSE}_{ab}) $$\n\nStatic also means that you will use them like float mse = Image.meanSquaredError(imageA, imageB);.\nOpen a test image and add some noise using addNoise in exercises.Exercise05 (src/main/java/exercises/Exercise05.java).\n (new ij.ImageJ()).exitWhenQuitting(true);\n Image original = lme.DisplayUtils.openImageFromInternet("https://mt2-erlangen.github.io/shepp_logan.png", ".png");\n original.setName("Original");\n \n Image noise = new Image(original.width(), original.height(), "Noise");\n noise.addNoise(0.f, 10.f);\n\n Image noisyImage = original.minus(noise); // You might also implement your own `plus` ;-)\n\nApply a Gauss filter (choose a good filterSize and sigma) on the noisy image and compare the result with the original image.\nCan the error be reduced in comparison to the unfiltered noisy image? Also take a look at the error images that you can\ncalculate using your minus method of the class Image.\n\nHint: You can use a for-loop to try out different values for sigma.\nHint: You do not need to submit written answers to the questions in the text. Just do the corresponding experiments!\n\nNon-Linear Filters\n3 Points\nA quality criterion for medical images is sharp edges.\nHowever, while the Gauss filter reduces the noise, it also blurs out those edges.\nIn this exercise, we try to mitigate that problem using non-linear filters.\nSimilar to a convolution, non-linear filters calculate each output pixel value from a neighborhood of the\ninput image. Remember the sliding window from exercise 3? 
Non-linear filters do exactly the same.\n\nSource: https://github.com/vdumoulin/conv_arithmetic\nCreate a class mt.NonLinearFilter in the file src/main/java/mt/NonLinearFilter.java:\n// Your name here <your idm>\n// Your team partner here <partner's idm>\npackage mt;\n\nimport lme.WeightingFunction2d;\nimport lme.NeighborhoodReductionFunction;\n\npublic class NonLinearFilter implements ImageFilter {\n\n // Name of the filter\n protected String name; \n // Size of the neighborhood, 3 would mean a 3x3 neighborhood\n protected int filterSize;\n // Calculates a weight for each neighbor\n protected WeightingFunction2d weightingFunction = (centerValue,neighborValue,x,y) -> 1.f;\n // Calculates output value from neighbors and weights\n protected lme.NeighborhoodReductionFunction reductionFunction;\n\n public NonLinearFilter(String name, int filterSize) {\n this.filterSize = filterSize;\n this.name = name;\n }\n\n @Override\n public String name() {\n return name;\n }\n}\n\nAs you can see, NonLinearFilter uses two interfaces. You can copy them into your src/main/java/lme/ folder.\n// in file `src/main/java/lme/WeightingFunction2d.java`\npackage lme;\n\n@FunctionalInterface // Does nothing. But Eclipse is happier when it's there.\npublic interface WeightingFunction2d {\n // Assigns a neighbor (shiftX, shiftY) a weight depending on its value and the value of the pixel in the middle of the neighborhood\n float getWeight(float centerValue, float neighborValue, int shiftX, int shiftY);\n}\n\nand\n// in file `src/main/java/lme/NeighborhoodReductionFunction.java`\npackage lme;\n\n@FunctionalInterface\npublic interface NeighborhoodReductionFunction {\n // Calculates the output pixels from the values of the neighborhood pixels and their weight\n float reduce(float[] values, float[] weights);\n}\n\nImplement the method apply for NonLinearFilter.\n @Override\n public void apply(Image input, Image result)\n\nThe method should calculate each output pixel from a neighborhood. 
So:\n\nCreate an array to hold the values of the neighborhood pixels. How many neighborhood pixels are there?\nLoop over each output pixel:\n\nFill the array of neighborhood pixels with values from the input image (needs two inner loops)\nUse this.reductionFunction.reduce to determine the value of the output pixel. You can use null for the second parameter for now (we will implement weights later).\nSave the value to the output image (using setAtIndex).\n\n\n\nOverall, the method should look very similar to your LinearImageFilter.apply method.\nTo test your method, implement a MedianFilter in a file src/main/java/mt/MedianFilter.java as a subclass of NonLinearFilter.\n// Your name here\n// Team partner's name here\npackage mt;\n\nimport java.util.Arrays;\n\npublic class MedianFilter extends NonLinearFilter {\n\tpublic MedianFilter(int filterSize) {\n // TODO:\n super(...);\n reductionFunction = ...;\n\t}\n}\n\nThe MedianFilter is a NonLinearFilter with\nreductionFunction (values, weights) -> { Arrays.sort(values); return values[values.length / 2]; }\n(it sorts the values and takes the one in the middle).\nAll you need to do is to call the super constructor and set reductionFunction.\nDoes the median filter also reduce the noise in the image?\nBilateral Filter\n2 Points\nNext, we will implement the BilateralFilter.\npackage mt;\n\npublic class BilateralFilter extends NonLinearFilter {\n GaussFilter2d gaussFilter;\n\n public BilateralFilter(int filterSize, float spatialSigma, float valueSigma){\n ...\n }\n}\n\nThe bilateral filter assigns a weight to each neighborhood pixel.\nSo modify your NonLinearFilter.apply method so that it also creates a weights array and uses weightingFunction.getWeight to\nfill it. 
reductionFunction should now also be called with the weights array.\nThe bilateral filter has two parameters $\\sigma_{\\text{value}}$ and $\\sigma_{\\text{spatial}}$.\nFor large values of $\\sigma_{\\text{spatial}}$ the bilateral filter behaves like a Gauss filter.\nInitialize gaussFilter in the constructor. Set weightingFunction so that the weights $w_s$ of the Gauss filter are returned.\nSet reductionFunction. It should multiply each of the values with its weight and then sum the results up.\nYour BilateralFilter should now behave like a Gauss filter. Does it pass the test in GaussFilter2dTests when you\nuse BilateralFilter instead of GaussFilter2d?\nEdge-Preserving Filtering\n2 Points\nTo make our bilateral filter edge preserving, we also have to use $\\sigma_{\\text{value}}$.\nThe value weight $w_v$ is calculated as follows\n$$ w_v = \\exp\\left(-\\frac{\\left(\\mathtt{centerValue}-\\mathtt{value}\\right)^2}{2 \\sigma_{\\text{value}}^2}\\right) $$\nJust multiply with this value $w_v$ in weightingFunction. The total weight of a pixel will then be $w_v \\cdot w_s$.\nNow we have the problem that our weights will no longer add up to one! To solve this problem, divide by the sum of the weights\nin the reductionFunction.\nCan you reduce the error even more using the bilateral filter? My results look like this.\n\nOriginal\nNoisy\nGauss filtered\nBilateral filtered\n\nError Unfiltered\nError Gauss\nError Bilateral\n" }, { "title": "Exercise 4", "url": "https://mt2-erlangen.github.io/archive/2020/exercise-4/", "body": "Submission\nSubmission deadline: 01.06.20 23:55h\nPlease ensure that all files you created contain your name and IDM ID, as well as your partner's name and IDM ID if you're not working alone.\nEach exercise has 10 points. 
You have to achieve 30 of 60 points in six homework exercises to pass the module.\nImage Transformations\nIn the previous exercises, we built a Signal and Image class for performing basic operations on the \ninput data. We also implemented various filters to process the data and remove noise. \nIn this exercise we will build on top of the image class and implement methods for performing image transformations.\nIn many medical applications there is a need to align two images so that we \ncan combine the information from both images. This can be due to the images coming from \ndifferent modalities (like CT and MRI) or in scenarios where you have patient data from \ndifferent times (before and after a surgery) and you want to compare the two images. \nIn all these scenarios we use image registration to bring the different images together.\nIn the image below, two x-ray views (1) and (2) are fused together to obtain the combined view (3),\nwhich provides more information for diagnosis. This is achieved using image registration between view (1) and view (2).\n \nImage Source: Hawkes, David J., et al. "Registration and display of the combined bone scan and \nradiograph in the diagnosis and management of wrist injuries." European journal of nuclear medicine \n18.9 (1991): 752-756. \nOne of the crucial components of image registration is image transformation.\nIn this exercise we will implement basic image transformations. Additionally, we need to implement an \ninterpolation method to find out the image intensity values at the transformed coordinates. \n\nOverview of tasks\n\nWe will implement the following tasks for this exercise.\n\nHelper functions (a. Image origin, b. Interpolation)\nImage Transformation (a. Translation, b. Rotation, c. Scaling)\n\nWe introduce the basic theory about image transformations in the theoretical background section.\nPlease read the theory before proceeding since we don't re-introduce everything in the task description. 
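The bilinear interpolation used throughout the tasks can be previewed as a minimal standalone sketch (plain row-major arrays instead of the course's mt.Image; the `Bilinear` class and its method names are illustrative assumptions, and origin/spacing corrections are omitted here):

```java
public class Bilinear {
    // 1D linear interpolation between f(x1) and f(x2), with t = x - x1 in [0, 1]
    static float lerp(float fx1, float fx2, float t) {
        return fx1 + t * (fx2 - fx1);
    }

    // pixels: row-major buffer of size width*height; out-of-range reads return 0
    static float at(float[] pixels, int width, int height, int x, int y) {
        if (x < 0 || x >= width || y < 0 || y >= height) return 0.f;
        return pixels[y * width + x];
    }

    // Bilinear interpolation at a fractional coordinate (x, y),
    // assuming origin (0, 0) and pixel spacing 1.0
    static float interpolatedAt(float[] pixels, int width, int height, float x, float y) {
        int x1 = (int) Math.floor(x), y1 = (int) Math.floor(y);
        float tx = x - x1, ty = y - y1;
        // interpolate along x on the two neighboring rows ...
        float i1 = lerp(at(pixels, width, height, x1, y1),
                        at(pixels, width, height, x1 + 1, y1), tx);
        float i2 = lerp(at(pixels, width, height, x1, y1 + 1),
                        at(pixels, width, height, x1 + 1, y1 + 1), tx);
        // ... then along y between the two intermediate values
        return lerp(i1, i2, ty);
    }
}
```

For a 2×2 image with pixel values 0, 1, 2, 3 the value at the center (0.5, 0.5) comes out as 1.5, the average of all four neighbors.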
\nTask Description\n\nWe provide the main method for the task with an interactive ImageJ plug-in in the files\nsrc/main/java/exercises/Exercise04.java\nand src/main/java/mt/ImageTransformer.java\n\n0. Getting started\n1 Point\n\n\nFor Exercise 4 we provide a GUI that displays the image with different image transformation options.\n\n\n\nOnce you have all the transformations implemented you should be able to adjust the sliders and perform the desired transformations in an interactive manner.\n\n\nThe transformations require an origin point about which we perform all the transformations.\n\n\nExtend the Image class with these three methods:\n\n\n // store the origin point (x, y) as\n // a class variable\n public void setOrigin(float x, float y)\n\n // origin() returns {x, y} as a float\n // array from the stored origin class variable.\n public float[] origin()\n\n // Sets the origin to the center of the image\n public void centerOrigin()\n\n\nWe already set the origin point for you in the file src/main/java/exercises/Exercise04.java.\nTo ensure that everything is running, run the main function of Exercise04.\n\n1. 
Image interpolation\n4 Points\n\n\nSince the image transformations rely heavily on interpolation, we first implement the interpolation method by extending the Image class with the following method:\n\n\npublic float interpolatedAt(float x, float y) \n\n\nThe method takes in a physical $(x,y)$ coordinate and returns the image intensity at that position.\nWe use bilinear interpolation to find the value at $(x,y)$ (described in the theory).\n\n\nWe can rewrite the interpolation equation using the linear interpolation formula when we want to interpolate between two points $x_1,x_2$ with function values $f(x_1),f(x_2)$ to find out the function value $f(x)$ at $x$.\n\n\n$$ \\frac{f(x) - f(x_1)}{x-x_1} = \\frac{f(x_2) - f(x_1)}{x_2 - x_1} $$\n\n\n\n\nSince the difference $x_2 - x_1$ is either 1.0 (if we have a pixel spacing of 1.0) or the pixel spacing, we can simplify the above equation as follows:\n\n$$f(x) = f(x_1) + (x-x_1) (f(x_2) - f(x_1))$$\n\n\n\nYou can use the function below to compute the linear interpolation between two points $x_1,x_2$ at $x$\n\n // Definition of arguments\n // diff_x_x1 = x - x_1, the difference between point x and x_1\n // fx_1 = f(x_1), pixel value at point x_1\n // fx_2 = f(x_2), pixel value at point x_2 \n\n float linearInterpolation(float fx_1, float fx_2, float diff_x_x1) {\n return fx_1 + diff_x_x1 * (fx_2 - fx_1);\n }\n \n\n\n\nWe now have a way to interpolate between two points in 1D. We need to extend this to the 2D case such that we can use \nit for interpolating values in our image. An illustration of how this can be done is \nalready given in the theory section.\n\n\nImplementation detail We describe here a possible way to implement the interpolation scheme.\n\n\nFind the 4 nearest pixel indices for the given physical coordinate $(x,y)$. To do this, you have to transform\nthe physical coordinate to the index space of the image.\n\n\nHint: In physical space all the values of $x$ and $y$\nare computed from the origin. 
So we just need to subtract the origin from the coordinates for this correction.\nx -= origin[0];\ny -= origin[1];\n\n\n\nPixel spacing also alters the physical coordinates and needs to be corrected for. \nThis can be done just by dividing each coordinate by the pixel spacing.\nx /= spacing;\ny /= spacing;\n\n\n\nHint: Since each pixel is a unit square, you can round up and down each coordinate ($x$ and $y$) separately \nto get the 4 nearest pixel coordinates.\n\n\nInterpolate along an axis (here we choose the x-axis) initially using the linear interpolation \nfunction to obtain intermediate points.\n\n\nNow interpolate along the intermediate points (i.e. you are interpolating along the y-axis).\n\n\nNote: Take care of image origin and pixel spacing for the input coordinates before you perform any of the steps.\nAlso, always use atIndex and setAtIndex for accessing the image values. \nThis ensures that we handle the values at the boundary correctly.\n\n\n\n\nExample:\nHere we look at a single point to understand how to implement our algorithm.\n\n\nIf we have an input $(x,y) = (0.4,0.4)$, then the 4 nearest pixel coordinates are $(0,0), (1,0), (1,1), (0,1)$.\n\n\nInterpolating the values between the points $a = (0,0)$ and $b = (1,0)$, find the intermediate \nvalue at point $I_1 = (0.4,0)$.\n\n\nSimilarly, interpolate between $c = (0,1)$ and $d = (1,1)$ to find the intermediate value at point $I_2 = (0.4,1)$.\n\n\nNow we can just use the values at the intermediate points $I_1 = (0.4,0)$ and $I_2 = (0.4,1)$ and \nperform a linear interpolation in the y direction to obtain the final result at $(0.4,0.4)$.\n\n\n\n\n2. 
Image Transformation\n5 Points\nNow we can start with the implementation of the ImageTransformer class.\n\nThe class has the following member variables for the transformation parameters:\n\n// Transformation parameters\npublic float shiftX; // tx\npublic float shiftY; // ty\npublic float rotation; // theta\npublic float scale; // s\n\n\n\nAlso use the ImageFilter interface which you have implemented in the previous exercises. \nThis can be done using the implements keyword.\n\n\nAdd the method apply(Image input, Image output) which takes in two variables input and \noutput of Image class type. The input variable provides the input image to our transformer class. \nThe output variable is where the transformed image is stored.\n\n\nConsider each pixel in the image with index $(i,j)$. When we access an image pixel we get \nthe pixel intensity stored at the location $(i,j)$.\n\n\nHere $(i,j)$ represents the image coordinates $(x,y)$ and the pixel value at $(i,j)$ represents $f(x,y)$.\n\n\nWe want to transform $(x,y) \\to (x',y')$ and find the pixel value at the new location for a \ngiven set of input transformation parameters $t_x,t_y,\\theta,s$.\n\n\nLet us go over a possible approach to implement the apply method, which \nimplements translation, rotation and scaling. In addition, once we have the transformed coordinates $(x',y')$ we \ninterpolate the value at this coordinate to set the output value of the new image.\n\n\nWe can implement the transformations and interpolation using the equations defined \nin the theory section. 
\n\n\nHowever, from the implementation perspective it is much easier to ask what will be my output image value \nat the current position $(x',y')$ for the given transformation parameters.\n\n\nFor this we need to find the input coordinate $(x,y)$ for the given transformation parameters.\nThis mapping from $(x',y') \\to (x,y)$ is known as the inverse transformation.\n\n\nJust to recap: our current aim is to iterate over the output image along each \npixel $(i,j)$ (also referred to as $(x',y')$) and find the inverse transformation $(x,y)$.\nOnce we find $(x,y)$ we can just interpolate the values in the input image at $(x,y)$ and\nset the result as the output image value at $(x',y')$.\n\n\nAn example code to accomplish this looks like below:\n\n\n// We need to compute (x,y) from (x',y')\n// We use xPrime,yPrime in the code to indicate (x',y')\n// Interpolate the values at (x,y) from the input image to get the new pixel value\nfloat pixelValue = input.interpolatedAt(x,y);\n\n// Set your result at the current output pixel (x',y')\noutput.setAtIndex(xPrime, yPrime, pixelValue);\n\n\n\n\nThe inverse transformations can be computed using the following equations.\n\n\nTranslation\n\n$x = x' - t_x$\n$y = y' - t_y$ \n\n\n\nRotation\n\n$x = x' \\cos\\theta + y' \\sin\\theta$\n$y = - x' \\sin\\theta + y' \\cos\\theta$\n\n\n\nScaling\n\n$x = \\frac{x'}{s}$\n$y = \\frac{y'}{s}$\n\n\n\nImplementation detail Now you can directly use the above equations to implement translation, rotation and scaling.\nThe entire apply method for the ImageTransformer class can be implemented as follows:\n\n\nIterate over each pixel in the output image (although they are just the same as the input initially).\n\n\nAt each pixel the index $(i,j)$ represents our coordinates $(x',y')$ of the output image.\n\n\nApply the transformations using the equations described above to find $(x,y)$.\n\n\nNow set the output image value at $(i,j)$ (also referred to as $(x',y')$) from the interpolated values at $(x,y)$ \nfrom the input image.\n\n\nUse setAtIndex() 
for setting the values of the output image and atIndex() for getting the values \nfrom the input image.\n\n\nIn the above formulation we assume a pixel spacing of $1.0$ and the \nimage origin at $(x_0, y_0) = (0,0)$.\n\n\nYou can extend this to work for different values of pixel spacing and origin.\n\n\nHint: Think of pixel spacing as a scaling and origin as a translation transformation. \n(Apply both the spacing and origin transformation to the input coordinates $(x,y)$ as $(x \\cdot p_x, y \\cdot p_y) + (x_0, y_0)$.) \n\n\n\n\n" }, { "title": "Exercise 3", "url": "https://mt2-erlangen.github.io/archive/2020/exercise-3/", "body": "Submission deadline: 25.05.20 23:59h\nPlease ensure that all files you created contain your name and IDM ID, as well as your partner's name and IDM ID if you're not working alone.\nEach exercise has 10 points. You have to achieve 30 of 60 points in six homework exercises to pass the module.\nImages and 2-d Convolution\nIn this exercise, we finally get to work with images. 
It's time to update the file src/main/java/lme/DisplayUtils.java to the newest version.\nThis should provide you with the following methods to work with images:\n // Open a file\n public static mt.Image openImage(String path) \n\n // Download and open a file from the internet\n public static mt.Image openImageFromInternet(String url, String filetype) \n\n // Save an image to a file\n public static void saveImage(mt.Image image, String path) \n\n // Show images\n public static void showImage(float[] buffer, String title, int width) \n public static void showImage(float[] buffer, String title, long width, float[] origin, double spacing, boolean replaceWindowWithSameName)\n\n\nThey all work with the class mt.Image so let's create it!\nBefore that, add the following two methods to your Signal class (they are used by the tests of this exercise):\n // Needs: import java.util.Random\n public void addNoise(float mean, float standardDeviation) {\n\tRandom rand = new Random();\n\tfor (int i = 0; i < buffer.length; i++) {\n\t buffer[i] += mean + rand.nextGaussian() * standardDeviation;\n\t}\n }\n\n public void setBuffer(float[] buffer) {\n\tthis.buffer = buffer;\n }\n\nPS: The method addNoise is also useful to test your mean and standardDeviation calculation from exercise 2.\nCreate a long signal and add noise with a specific mean and standardDeviation.\nThe results of your mean and standardDeviation methods should be approximately the same values.\nmt/Image.java\n4 Points\nThe code for this section should go to src/main/java/mt/Image.java\nOur goal is to share as much code as possible with our mt.Signal class. 
So mt.Image will be a subclass of mt.Signal.\n// <your name> <your idm>\n// <your partner's name> <your partner's idm> (if you submit with a group partner)\npackage mt;\n\nimport lme.DisplayUtils;\n\npublic class Image extends Signal {\n\n\n}\n\nmt.Image has five members (apart from the ones inherited by mt.Signal).\n // Dimensions of the image\n protected int width; \n protected int height; \n\n // Same as Signal.minIndex but for X and Y dimension\n protected int minIndexX;\n protected int minIndexY;\n\n // For exercise 4 (no need to do anything with it in exercise 3)\n protected float[] origin = new float[]{ 0, 0 };\n\nAnd two constructors:\n // Create an image with given dimensions\n public Image(int width, int height, String name)\n\n // Create an image with given dimensions and also provide the content\n public Image(int width, int height, String name, float[] pixels)\n\nAs shown in the exercise slides, we will store all the pixels in one array, like we did in Signal.\nThe array should have the size width * height.\nminIndexX,minIndexY should be 0 for normal images.\n\n\nCall the constructors of the super class Signal in the constructors of Image.\nYou can call the constructor of a super class by placing super(...) 
with the respective arguments in the first line of the constructor of the subclass.\nThe constructor public Image(int width, int height, String name, float[] pixels) does not need to create its own array (take pixels for buffer).\nBut you can check whether pixels has the correct size.\nLet's also provide some getters!\n // Image dimensions\n public int width()\n public int height()\n\n // Minimum and maximum indices (should work like Signal.minIndex/maxIndex)\n public int minIndexX()\n public int minIndexY()\n public int maxIndexX()\n public int maxIndexY()\n\natIndex and setAtIndex should work like in Signal except that they now have two coordinate indices.\natIndex should return 0.0f if either the x or y index is outside of the image ranges.\n public float atIndex(int x, int y)\n public void setAtIndex(int x, int y, float value)\n\nRemember how we calculated the indices in the exercise slides. You have to apply that formula in atIndex/setAtIndex.\n\n\nAdd the method show to display the image\n public void show() {\n DisplayUtils.showImage(buffer, name, width(), origin, spacing(), /*Replace window with same name*/true);\n }\n\nOpen the image pacemaker.png in a file src/main/java/exercises/Exercise03.java (in the same project as the previous exercise):\n// <your name> <your idm>\n// <your partner's name> <your partner's idm> (if you submit with a group partner)\npackage exercises;\n\nimport mt.GaussFilter2d;\nimport mt.Image;\n\npublic class Exercise03 {\n public static void main(String[] args) {\n (new ij.ImageJ()).exitWhenQuitting(true);\n\n Image image = lme.DisplayUtils.openImageFromInternet("https://mt2-erlangen.github.io/pacemaker.png", ".png");\n image.show();\n\n }\n}\n\nThe image is from our open access book.\n\nmt.ImageFilter\n3 Points\nLike in Exercise 1, we want to be able to convolve our image signal.\nIn fact, we will learn a lot of new ways to process images.\nOften, we need to create an output image of the same size.\nLet's create an interface 
(src/main/java/mt/ImageFilter.java) for that, so we only need to implement this once.\npackage mt;\n\npublic interface ImageFilter {\n default mt.Image apply(mt.Image image) {\n Image output = new Image(image.width(), image.height(), image.name() + " processed with " + this.name());\n apply(image, output);\n return output;\n }\n\n default void apply(mt.Image input, mt.Image output) {\n throw new RuntimeException("Please implement this method!");\n }\n\n String name();\n}\n\nThe code for the convolution should go to src/main/java/mt/LinearImageFilter.java.\nOk. Now the convolution. The class already has a method that we will need later. It uses your sum method.\n// <your name> <your idm>\n// <your partner's name> <your partner's idm> (if you submit with a group partner)\npackage mt;\n\npublic class LinearImageFilter extends Image implements ImageFilter {\n\n public void normalize() {\n\tdouble sum = sum();\n\tfor (int i = 0; i < buffer.length; i++) {\n\t buffer[i] /= sum;\n\t}\n }\n}\n\nCreate a constructor for it. 
Recall how we implemented LinearFilter!\nminIndexX and minIndexY need to be set to $-\\lfloor L_x/2 \\rfloor$ and $-\\lfloor L_y/2 \\rfloor$, where $L_x$ is the\nfilter's width and $L_y$ the filter's height.\n public LinearImageFilter(int width, int height, String name)\n\nConvolution in 2-d works similarly to convolution in 1-d.\n$$K_x = \\lfloor L_x/2 \\rfloor$$\n$$K_y = \\lfloor L_y/2 \\rfloor$$\n$$g[x,y] = \\sum_{y'=-K_y}^{+K_y} \\sum_{x'=-K_x}^{+K_x} f[x-x', y-y'] \\cdot h[ x', y' ] $$\n$$g[x,y] = \\sum_{y'=\\text{h.minIndexY}}^{\\text{h.maxIndexY}} \\sum_{x'=\\text{h.minIndexX}}^{\\text{h.maxIndexX}} f[x-x', y-y'] \\cdot h[ x', y' ] $$\nRemember to use atIndex and setAtIndex to get and set the values.\nImplement the convolution in the method apply.\nThe result image was already created by our interface ImageFilter.\n public void apply(Image image, Image result)\n\n\n\nSource: https://github.com/vdumoulin/conv_arithmetic\nNow it's time to test!\nUse the file src/test/java/mt/LinearImageFilterTests.java.\nGauss Filter\n2 Points\nThe code for the Gauss filter should go to src/main/java/mt/GaussFilter2d.java.\nThe Gauss filter is a LinearImageFilter with special coefficients (the filter has the same height and width).\n// <your name> <your idm>\n// <your partner's name> <your partner's idm> (if you submit with a group partner)\npackage mt;\n\npublic class GaussFilter2d extends LinearImageFilter {\n \n}\n\nIt has the following constructor:\n public GaussFilter2d(int filterSize, float sigma)\n\nIn the constructor, set the coefficients according to the unnormalized 2-d normal distribution with standard deviation $\\sigma$ (sigma).\nMath.exp is the exponential function. 
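As a cross-check for your constructor, the normalized Gaussian coefficients can also be computed standalone with a plain array (a sketch assuming a square filter; the `GaussCoeffs` class is illustrative and not part of the exercise code base, which uses setAtIndex and normalize() instead):

```java
public class GaussCoeffs {
    // Returns normalized 2-d Gaussian filter coefficients as a flat
    // row-major array of size filterSize * filterSize.
    static float[] coefficients(int filterSize, float sigma) {
        int k = filterSize / 2; // centered indices run from -floor(L/2)
        float[] h = new float[filterSize * filterSize];
        double sum = 0.0;
        for (int j = 0; j < filterSize; j++) {
            for (int i = 0; i < filterSize; i++) {
                int x = i - k, y = j - k; // centered indices
                // unnormalized 2-d normal distribution: exp(-(x^2+y^2)/(2*sigma^2))
                h[j * filterSize + i] =
                        (float) Math.exp(-(x * x + y * y) / (2.0 * sigma * sigma));
                sum += h[j * filterSize + i];
            }
        }
        // like normalize(): make all coefficients sum up to one
        for (int i = 0; i < h.length; i++) h[i] /= sum;
        return h;
    }
}
```

The center coefficient should be the largest, the kernel symmetric, and the sum of all coefficients one — properties your GaussFilter2d should share.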
Use setAtIndex: $x$ should run from minIndexX to maxIndexX and $y$ from minIndexY to maxIndexY.\n$$ h[x,y] = \\mathrm{e}^{-\\frac{x^2+y^2}{2 \\sigma^2}}$$\nCall normalize() at the end of the constructor to ensure that all coefficients sum up to one.\nTest your Gauss filter in Exercise03.java.\nUse arbitrary values for sigma and filterSize.\nThe Gauss filter will clearly blur your input image if you choose a large value for sigma.\n\nThere is also a unit test file that you can use: src/test/java/mt/GaussFilter2dTests.java\nCalculating with Images\n1 Point\nThe code for this section should go to src/main/java/mt/Image.java.\nImplement the method Image.minus in Image.java that subtracts another image element-wise from the current one and returns the result:\n public Image minus(Image other)\n\nWe use this method to calculate error images.\nYou can implement this with only one loop over the elements of the buffers of the two images.\nDemo\nThis is not required for the exercise!\nPlace the file src/main/java/exercises/Exercise03Demo.java\nin your project folder and run it.\n\nYou should see an interactive demo applying your Gauss filter to a noisy image.\nYou can change the parameters used.\nSubmitting\nPlease ensure that all files you created contain your name and IDM ID, as well as your partner's name and IDM ID if you're not working alone.\nThen, compress your source code folder src to a zip archive (src.zip) and submit it on studOn.\n" }, { "title": "Exercise 2", "url": "https://mt2-erlangen.github.io/archive/2020/exercise-2/", "body": "Submission deadline: 18.05.20 23:59h\nPlease ensure that all files you created contain your name and IDM ID, as well as your partner's name and IDM ID if you're not working alone.\nEach exercise has 10 points. 
You have to achieve 30 of 60 points in six homework exercises to pass the module.\nStatistical Measures\nIn this exercise, we want to have a look at how we can analyze signals using simple statistical measures.\nWe will use a freely available ECG data set with the goal of distinguishing healthy patients from patients with heart rhythm problems.\n\nYou can find the original data set here,\nbut we recommend using a post-processed version available on studOn.\nGradle Build System\nIn Medizintechnik II we use the build system Gradle.\nGradle is especially popular for Android projects since it's easy to add new software dependencies that will be automatically\ndownloaded.\nIn our case, the published data set is saved as Matlab *.mat files.\nTo read those files, an external dependency was already added to our build.gradle file.\n implementation 'us.hebi.matlab.mat:mfl-core:0.5.6'\n\ndoes the magic and automatically downloads a *.mat file reader.\nIn case you need to add external software to your own projects, you can use this search engine.\nTasks\nLoading One File of the Data Set\nLoad the file src/main/java/exercises/Exercise02.java (available here (click the raw button)) into your existing project.\nIt already contains some code for parsing the program parameters:\n public static void main(String[] args) throws IOException {\n\t(new ij.ImageJ()).exitWhenQuitting(true);\n\n\tSystem.out.println("Started with the following arguments:");\n\tfor (String arg : args) {\n\t System.out.println(arg);\n\t}\n\n\tif (args.length == 1) {\n\t File file = new File(args[0]);\n\t if (file.isFile()) {\n\t\t// Your code here:\n\n\n\t } else {\n\t\t System.err.println("Could not find " + file);\n\t }\n\n\t} else {\n\t System.out.println("Wrong argcount: " + args.length);\n\t System.exit(-1);\n\t}\n\nLaunch Exercise02 with one of the files of the data set as an argument (e.g.
<where_you_saved_your_data_set>/MLII/1 NSR/100m (0).mat)!\n\nHow to do that in Eclipse\nHow to do that in IntelliJ\n\nYour program should now print the file name you selected:\n\nRemember never to put file names directly in your code. Your program would then only work on your machine!\nLet's open this file!\nif (file.isFile()) {\n // A file should be opened \n us.hebi.matlab.mat.types.Matrix mat = Mat5.readFromFile(file).getMatrix(0);\n mt.Signal heartSignal = new mt.Signal(mat.getNumElements(), "Heart Signal");\n for (int i = 0; i < heartSignal.size(); ++i) {\n\t heartSignal.buffer()[i] = mat.getFloat(i);\n }\n heartSignal.show();\n\n\n} else if (file.isDirectory()) {\n\nYou should now see the signal. However, this plot does not have any labels with physical units attached.\nWe will change that later.\n\nExtension of Signal.java\n4 Points\nTo analyze this and other signals, we will extend our Signal class.\nPlease implement the following methods in Signal.java that calculate some descriptive properties of the signal:\n public float min() //< lowest signal value\n public float max() //< largest signal value\n public float sum() //< sum of all signal values\n public float mean() //< mean value of the signal\n public float variance() //< variance of the signal\n public float stddev() //< standard deviation of the signal\n\nTest the methods in your main function and check whether the calculated values seem plausible\nby looking at your plot and printing the calculated values.\nPhysical Dimensions\n1 Point\nThe code for this section belongs to Signal.java.\nIn the last exercise, we treated signals as a pure sequence of numbers without any physical dimensions.\nBut for medical measurements, physical dimensions are important.\nWe want to extend our plot to look like this, with the horizontal axis labeled with seconds:\n\nTo do this we will add a new member to our signal that describes the physical distance between two samples\n protected float spacing = 1.0f; //< Use 1.0f as a
default when we don't set the physical distance between points\n\nAlso add setter and getter methods\n public void setSpacing(float spacing) \n public float spacing() \n\nLook up the sampling frequency of the signal in the description of the data set\nand use it to calculate the spacing between two samples. Set this property via setSpacing in the main method.\nNext, we want to change show() to take our spacing into account and to accept an ij.gui.Plot so that we can set the axes of our plot.\n public void show(Plot plot) {\n\t DisplayUtils.showArray(buffer, plot, /*start of the signal=*/0.f, spacing);\n }\n\nBecause we are lazy, we can still keep the original usage of show()\n public void show() {\n\t DisplayUtils.showArray(buffer, name, /*start of the signal=*/0.f, spacing);\n }\n\nPlease create an instance of ij.gui.Plot in the main method of Exercise02 with descriptive labels for both axes and use it for heartSignal.show(...).\n\n// Constructs a new Plot with the default options.\nPlot plot = new Plot("choose title here", "choose xLabel here", "choose yLabel here");\nheartSignal.show(plot);\n\n//... add more plotting stuff here\n\nplot.show();\n\nDetermine the Heart Frequency\n5 Points\nThe remainder of this exercise will be implemented in Exercise02.java.\nCreate a file src/main/java/lme/HeartSignalPeaks.java with the following content\npackage lme;\n\nimport java.util.ArrayList;\n\npublic class HeartSignalPeaks {\n\tpublic ArrayList<Double> xValues = new ArrayList<Double>();\n\tpublic ArrayList<Double> yValues = new ArrayList<Double>();\n}\n\nArrayLists behave like arrays, except you can add new items to make them longer. You can read more about them here.\nWe now want to find the peaks of the heart signal.
We do that by finding local maxima within regions that are above a certain\nthreshold (here in blue).\nFind a good value for this threshold so that all peaks are above it.\nYou may use mean(), max(), min() to calculate it.\nYou can see your threshold by plotting it:\n plot.setColor("blue");\n plot.add("lines", new double[] { 0, /* a high value */10000 }, new double[] { threshold, threshold });\n\n\nImplement the following method that finds all peaks of the signal.\n public static lme.HeartSignalPeaks getPeakPositions(mt.Signal signal, float threshold)\n\nTo determine the signal peaks, one can use a normal maximum search over the signal values.\nSave the location of the maximum (i.e. the time at which the peak occurs, the arg max) in xValues and\nthe found maximum value (i.e. the signal amplitude, the max) in yValues.\nYou can implement the peak finding method as follows:\n\n\nLoop over the signal and at each index\n\n\nuse a boolean variable to determine whether the current signal value is above the threshold.\n\n\nIf the previous signal value was above the threshold (i.e. the boolean value was true) and the current value is below the threshold (i.e. the boolean value is false),\n\n\nadd the previous peak's position and value to the HeartSignalPeaks instance (peaks.xValues and peaks.yValues).\n\n\nThis is a suggested workflow, but feel free to use your own ideas to efficiently find the peaks of the signal.\nYou can plot the peaks you have found:\n plot.setColor("red");\n plot.addPoints(peaks.xValues, peaks.yValues, 0);\n\nNext, create a Signal with the difference in time between successive peaks (import java.util.ArrayList;).
\n\tpublic static mt.Signal calcPeakIntervals(lme.HeartSignalPeaks peaks) {\n\t\tArrayList<Double> peakPositions = peaks.xValues;\n\t\tif (peakPositions.size() > 1) {\n\t\t\tmt.Signal intervals = new mt.Signal(peakPositions.size() - 1, "Peak Intervals");\n\n\t\t\tfor (int i = 0; i < peakPositions.size() - 1; ++i) {\n\t\t\t\tintervals.buffer()[i] = (float) (peakPositions.get(i + 1) - peakPositions.get(i));\n\t\t\t}\n\t\t\treturn intervals;\n\t\t} else {\n\t\t\treturn new mt.Signal(1, "No Intervals found");\n\t\t}\n\t}\n\nYou can use that signal to determine the mean cycle duration (intervals.mean()), the mean heart frequency (1. / intervals.mean()) and\nbeats per minute (60. / intervals.mean()). Print those values!\nSummary of tasks\nTo summarize, the list of tasks that need to be implemented to complete this exercise:\n\nSet the file path correctly to load the signal into your program (this ensures you can load the signal inside the program)\nAdd labels to the plot and include the spacing variable in the Signal class to visualize plots in physical dimensions.\nImplement methods to compute statistical measures (like mean, variance, ...).
(Use the formulas provided in the lecture/exercise slides)\nDetermine the threshold (follow the description provided here)\nFind the peaks (follow the description provided here)\nCalculate intervals between the peaks\n\nNote\nWhen passing the file path as an argument, wrap it in quotes ("path") if the file name contains spaces, since Java parses spaces as argument separators.\nBonus\nThis is not required for the exercise.\nRun Exercise02 with other files of the data set as an argument.\nWhat is the meaning of the mean value and the variance of the time distance between the peaks?\nWhat do signals with low variance in the peak distances look like, and what about signals with high variance?\n\n" }, { "title": "Exercise 1", "url": "https://mt2-erlangen.github.io/archive/2020/exercise-1/", "body": "Signals and Convolution\nSubmission deadline: 11.05.20 23:59h\nPlease ensure that all files you created also contain your name and your IDM ID and also your partner's name and IDM ID if you're not working alone.\nEach exercise has 10 points.
You have to achieve 30 of 60 points in six homework exercises to pass the module.\nImageJ\nThe image processing program we want to use during this semester is called ImageJ.\nIt was developed at the US National Institutes of Health and is nowadays used especially in research\nfor medical and biological images.\nIf you want to, you can download a stand-alone version of the program here.\nGetting started\nImageJ can also be used as a Java library.\nWe already created a Java project that uses ImageJ.\nYou can download it from https://github.com/mt2-erlangen/exercises-ss2021 and import it with the IDE of your choice:\n\nInstructions for Eclipse\nInstructions for IntelliJ\n\nTasks\n\n\nYou should now be able to execute the file src/main/java/exercises/Exercise01.java\n\n\nThe following code opens the ImageJ main window and exits the running program when the window is closed.\npublic class Exercise01 {\n public static void main(String[] args) {\n (new ij.ImageJ()).exitWhenQuitting(true);\n\n }\n}\n\nIntelliJ will only allow you to run Exercise01 when there are no errors in the project.
You can simply comment out the method lme.Algorithms.convolution1d until you have implemented your Signal class.\nSignal.java\n4 Points\nAs a first step, we will implement the class Signal,\nwhich should hold a signal of finite length.\nCreate the file src/main/java/mt/Signal.java.\n// <your name> <your idm>\n// <your partner's name> <your partner's idm> (if you submit with a group partner)\npackage mt;\n\nimport lme.DisplayUtils;\nimport ij.gui.Plot;\n\npublic class Signal {\n\n}\n\nSignal should have the following members\n protected float[] buffer; // Array to store signal values\n protected String name; // Name of the signal\n protected int minIndex; // Index of first array element (should be 0 for signals)\n\nImplement two constructors for Signal\n public Signal(int length, String name) // Create signal with a certain length (set values later)\n public Signal(float[] buffer, String name) // Create a signal from a provided array\n\nImplement the following getter methods for Signal\n public int size() // Size of the signal\n public float[] buffer() // Get the internal array \n public int minIndex() // Get lowest index of signal (that is stored in buffer)\n public int maxIndex() // Get highest index of signal (that is stored in buffer)\n public String name() // Get the name of the signal\n\nNext, we want to visualize our Signal in the method show.
You can use the provided function lme.DisplayUtils.showArray.\nTo test it, create a Signal with arbitrary values in the main method of Exercise01 and call its show method.\n public void show() {\n DisplayUtils.showArray(this.buffer, this.name, /*start index=*/0, /*distance between values=*/1);\n }\n\nIn our blackboard exercises, we agreed that we want to continue our signals with zeros where we don't have any values stored.\nIf we access indices of our Signal smaller than minIndex() or larger than maxIndex(), we want to return 0.0f.\nIf a user accesses an index between minIndex() and maxIndex(), we want to return the corresponding value stored in our array.\n\nImplement the methods atIndex and setAtIndex. Please be aware that minIndex can be smaller than 0 for subclasses of Signal.\nIf setAtIndex is called with an invalid index (smaller than minIndex or greater than maxIndex), it's ok for the program to crash.\nThis should not happen for atIndex.\n public float atIndex(int i)\n public void setAtIndex(int i, float value)\n\nYou can check the correctness of atIndex/setAtIndex with the test testAtIndex in file src/test/java/SignalTests.java.\nLinearFilter.java\n3 Points\nImplement LinearFilter in file src/main/java/LinearFilter.java as a subclass of Signal.\nLinearFilter should work like Signal, except its minIndex should be at - floor(coefficients.length/2) as in the exercise slides.\n\nLinearFilter should have a constructor that checks that coefficients is an array of odd size or throws an error otherwise (any error is ok).\n public LinearFilter(float[] coefficients, String name)\n\nand a method that executes the discrete convolution on another Signal input and returns an output of the same size.\n public Signal apply(Signal input);\n\nYou should be able to directly use the formula from the exercise slides (f is the input signal, h our filter, $L$ the filter length)\n$$K = \\lfloor L/2 \\rfloor$$\n$$g[k] = \\sum_{\\kappa=-K}^{K} f[k-\\kappa] \\cdot h[ \\kappa ]$$\nor
with our minIndex/maxIndex methods for each index $k$ of the output signal.\n$$g[k] = \\sum_{\\kappa=h.\\text{minIndex}}^{h.\\text{maxIndex}} f[k-\\kappa] \\cdot h[\\kappa] $$\nBe sure that you use atIndex to access the values of the input and the filter.\n\nYou can test your convolution function with the tests provided in src/test/java/LinearFilterTests.java.\nGood test cases are:\n\n{0,0,1,0,0}: this filter should not change your signal at all\n{0,1,0,0,0}: this filter should move your signal one value to the left\n{0,0,0,1,0}: this filter should move your signal one value to the right\n\nQuestions\n3 Points\nIn this task we want to convolve a test Signal with three different linear filters.\nFilter the signal $f[k]$ = Signal(new float[]{0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0}, "f(k)")\nwith the filters\n\n$h_1[k]$: {1/3.f, 1/3.f, 1/3.f},\n$h_2[k]$: {1/5.f, 1/5.f, 1/5.f, 1/5.f, 1/5.f},\n$h_3[k]$: {0.5f, 0, -0.5f}.\n\nSave the images of the input signal and the filtered results (recommended filetype: png).\nCreate a PDF document (e.g. with Word or LibreOffice) with those images in which you briefly describe how the filters modified the input signal and why.\nSubmitting\nPlease ensure that all files you created also contain your name and your IDM ID and also your partner's name and IDM ID if you're not working alone.\nThen, compress your source code folder src to a zip archive (src.zip) and submit it and your PDF document via StudOn!
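The 1-D convolution formula and the zero-padding behaviour of atIndex can be sketched end to end like this. The helper names atIndex and convolve are ours for the sake of a self-contained example; in the exercise this logic lives inside LinearFilter.apply, and the filter stores its coefficients with minIndex = -K.

```java
import java.util.Arrays;

public class Convolution1dSketch {
    // Zero-padded access: indices outside the signal return 0, like Signal.atIndex.
    static float atIndex(float[] f, int i) {
        return (i >= 0 && i < f.length) ? f[i] : 0f;
    }

    // g[k] = sum_{kappa=-K}^{+K} f[k - kappa] * h[kappa], with K = floor(L/2).
    static float[] convolve(float[] f, float[] h) {
        int K = h.length / 2; // filter indices run from -K to +K
        float[] g = new float[f.length];
        for (int k = 0; k < f.length; ++k) {
            float sum = 0f;
            for (int kappa = -K; kappa <= K; ++kappa) {
                sum += atIndex(f, k - kappa) * h[kappa + K]; // h stored with offset K
            }
            g[k] = sum;
        }
        return g;
    }

    public static void main(String[] args) {
        float[] f = { 0f, 1f, 2f, 1f, 0f };
        // {0,0,1,0,0} is the identity filter from the test cases above.
        System.out.println(Arrays.toString(convolve(f, new float[] { 0f, 0f, 1f, 0f, 0f })));
        // [0.0, 1.0, 2.0, 1.0, 0.0]
        // {0,1,0,0,0}: the nonzero tap sits at kappa = -1, so g[k] = f[k+1] (shift left).
        System.out.println(Arrays.toString(convolve(f, new float[] { 0f, 1f, 0f, 0f, 0f })));
        // [1.0, 2.0, 1.0, 0.0, 0.0]
    }
}
```

Running the three test filters listed above through such a sketch is a quick way to convince yourself that the sign convention (f[k - kappa], not f[k + kappa]) produces the expected shift directions.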