CycleGAN online

CycleGAN should only be used with great care and calibration in domains where critical decisions are to be taken based on its output. This is especially true in medical applications, such as translating MRI to CT data. Just as CycleGAN may add fanciful clouds to a sky to make it look like it was painted by Van Gogh, it may add tumors in medical images. CycleGAN is an architecture that addresses this problem: it learns to perform image translations without explicit pairs of images. To learn horse-to-zebra translation, we only require two independent sets of images, one of horses and one of zebras; no one-to-one image pairs are required. CycleGAN will learn to perform style transfer from the two sets despite every image having a vastly different composition.

CycleGAN for interpretable online EMT compensation. Henry Krumb, Dhritimaan Das, Romol Chadda & Anirban Mukhopadhyay. International Journal of Computer Assisted Radiology and Surgery, volume 16, pages 757-765 (2021). Methods: Our online compensation strategy exploits cycle-consistent generative adversarial neural networks (CycleGAN). Positions are translated from various bedside environments to their bench equivalents, by adjusting their z-component.


  1. Methods: Our online compensation strategy exploits cycle-consistent generative adversarial neural networks (CycleGAN). 3D positions are translated from various bedside environments to their bench equivalents
  2. As mentioned earlier, CycleGAN works without paired examples of transformation from source to target domain. Recent methods such as Pix2Pix depend on the availability of training examples where the same data is available in both domains. The power of CycleGAN lies in being able to learn such transformations without a one-to-one mapping between training data in the source and target domains.
  3. CycleGAN. After seeing the horse2zebra GIF above, most of you would think of the following approach: prepare a dataset of horses and zebras in the same environment, in exactly the same…

CycleGAN Project Page - GitHub Page

  1. CycleGAN applications. Compared with the pix2pix model, CycleGAN does not need paired training data, so it has a wider range of applications. CycleGAN can transform image style, but it differs from natural style-transfer models, which only transfer the style of a single work (such as a starry-sky painting).
  2. CycleGAN is a popular algorithm belonging to the set of algorithms called generative models, which fall under unsupervised learning. The model was first described by Jun-Yan Zhu et al. in 2017. As we know, to solve the image-to-image translation problem, the algorithm needs to learn a mapping.
  3. Hands-on Implementation of CycleGAN, Image-to-Image Translation using PyTorch. 06/12/2020. A CycleGAN is designed for image-to-image translation, and it learns from unpaired training data. It gives us a way to learn the mapping between one image domain and another using an unsupervised approach.
  4. Two generators and two discriminators (Dx and Dy): the generator mapping functions are G: X → Y and F: Y → X, where X is the input image distribution and Y is the desired output distribution (such as Van Gogh styles).
  5. The original CycleGAN paper, Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks, was published by Jun-Yan Zhu et al. The accompanying code was written in Torch and hosted on GitHub. However, for our Getty Images hackfest, we decided to implement a CycleGAN in TensorFlow which can be trained and hosted on Azure.
  6. A CycleGAN learns forward and inverse mappings simultaneously using adversarial and cycle-consistency losses. This makes it possible to find an optimal pseudo pair from non-parallel data. Furthermore, the adversarial loss can bring the converted speech close to the target one on the basis of indistinguishability without explicit density estimation
  7. CycleGAN is a technique for training unsupervised image translation models via the GAN architecture using unpaired collections of images from two different domains. CycleGAN has been demonstrated on a range of applications including season translation, object transfiguration, style transfer, and generating photos from paintings
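The idea running through the snippets above can be sketched numerically: CycleGAN learns two mappings, G: X → Y and F: Y → X, and penalizes how far the reconstruction F(G(x)) drifts from the original x. A minimal toy sketch, where the affine G and F are illustrative stand-ins for real generator networks:

```python
# Toy sketch of CycleGAN's two mappings and the cycle-consistency idea.
# G and F here are simple affine functions chosen only for illustration;
# real CycleGAN generators are deep convolutional networks.

def G(x):          # forward mapping X -> Y (hypothetical toy transform)
    return [2.0 * v + 1.0 for v in x]

def F(y):          # inverse mapping Y -> X (exact inverse of G here)
    return [(v - 1.0) / 2.0 for v in y]

def l1(a, b):      # mean absolute error, the norm used for the cycle loss
    return sum(abs(p - q) for p, q in zip(a, b)) / len(a)

x = [0.1, 0.5, 0.9]            # a "real" sample from domain X
cycle_error = l1(F(G(x)), x)   # ||F(G(x)) - x||_1, ~0 for a perfect cycle
print(cycle_error)
```

During training, this cycle error (in both directions) is added to the adversarial losses, which is what lets the model learn from unpaired sets.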

This is made possible through image-to-image translation models like CycleGAN in the arcgis.learn module of the ArcGIS API for Python. Earth observation is an important yet challenging task, especially on cloudy days. SAR-to-RGB image translation using models like CycleGAN can be a great tool to overcome the limitations of optical imagery.

Explore and run machine learning code with Kaggle Notebooks, using data from "I'm Something of a Painter Myself".

Understand how unpaired image-to-image translation differs from paired translation, learn how CycleGAN implements this model using two GANs, and implement a CycleGAN to transform between horses and zebras.


How to Implement CycleGAN Models From Scratch With Keras. The Cycle Generative Adversarial Network, or CycleGAN for short, is a generator model for converting images from one domain to another. For example, the model can be used to translate images of horses to images of zebras, or photographs of city landscapes at night to the same landscapes during the day.

CycleGAN for interpretable online EMT compensation

CycleGAN-VC. We propose a non-parallel voice-conversion (VC) method that can learn a mapping from source to target speech without relying on parallel data. The proposed method is particularly noteworthy in that it is general purpose and high quality and works without any extra data, modules, or alignment procedure.

Title: CycleGAN for Interpretable Online EMT Compensation. Authors: Henry Krumb, Dhritimaan Das, Romol Chadda, Anirban Mukhopadhyay. Abstract: Purpose: Electromagnetic tracking (EMT) can partially replace X-ray guidance in minimally invasive procedures, reducing radiation in the OR. However, in this hybrid setting, EMT is disturbed.

Objective: To investigate the image quality of ultralow-dose CT (ULDCT) of the chest reconstructed using a cycle-consistent generative adversarial network (CycleGAN)-based deep learning method in the evaluation of pulmonary tuberculosis. Materials and methods: Between June 2019 and November 2019, 103 patients (mean age, 40.8 ± 13.6 years; 61 men and 42 women) with pulmonary tuberculosis were enrolled.

To train a CycleGAN model on your own datasets, you need to create a data folder with two subdirectories, trainA and trainB, that contain images from domains A and B. You can test your model on your training set by setting phase='train' in test.lua. You can also create subdirectories testA and testB if you have test data.
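The trainA/trainB layout described in the last snippet can be created in a few lines. A minimal sketch (the horse2zebra name is just an example dataset name, and a temporary directory stands in for your data root):

```python
import os
import tempfile

# Minimal sketch: create the folder layout CycleGAN training expects
# (trainA/trainB for the two unpaired domains, optional testA/testB).
root = os.path.join(tempfile.mkdtemp(), "horse2zebra")
for sub in ("trainA", "trainB", "testA", "testB"):
    os.makedirs(os.path.join(root, sub), exist_ok=True)

print(sorted(os.listdir(root)))  # -> ['testA', 'testB', 'trainA', 'trainB']
```

You would then copy domain-A images (e.g., horses) into trainA and domain-B images (e.g., zebras) into trainB; no pairing between the two folders is needed.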

When called, takes an input batch of real images from both domains and outputs fake images for the opposite domains (with the generators). Also outputs identity images after passing images into the generator that outputs their own domain type (needed for the identity loss). Attributes: 'G_A' ('nn.Module'): takes real input B and generates fake input A.

CycleGAN style-transfer examples. However, training GANs is extremely computationally expensive: the generation of high-resolution images is only possible with very high-end hardware and long training times. I hope the tricks and techniques explained in this article will help you in your HD image-generation adventures.

Let us tackle a simple problem that CycleGAN can address. In Chapter 3, Autoencoders, we used an autoencoder to colorize grayscale images, rgb2gray(RGB). Following on from that, we can use the grayscale training images as source-domain images and the original color images as target-domain images.
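The identity loss mentioned above feeds a generator an image that is already from its output domain and penalizes any change: G(y) should be approximately y. A toy sketch, where the clamping function below is a hypothetical stand-in for a real generator:

```python
# Identity-loss sketch: a generator mapping into domain Y should leave
# an image that is already in domain Y unchanged, i.e. G(y) ~ y.
def G(image):                              # toy generator (hypothetical)
    return [min(1.0, v) for v in image]    # identity on in-range pixels

def l1(a, b):                              # mean absolute error
    return sum(abs(p - q) for p, q in zip(a, b)) / len(a)

y = [0.2, 0.7, 1.0]            # a sample already in the output domain Y
identity_loss = l1(G(y), y)
print(identity_loss)           # -> 0.0
```

In practice the identity term helps preserve color composition between input and output and is weighted alongside the adversarial and cycle losses.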

Keras implementation of CycleGAN. As discussed earlier in this chapter, in the An Introduction to CycleGANs section, CycleGANs have two network architectures, a generator and a discriminator network. In this section, we will write the implementations for all the networks.

The CycleGAN with cycle consistency can generate more realistic and reliable images. The discriminator and the generator reach a local optimum in an adversarial manner, while the generator and the classifier cooperate to distinguish the domain of input images. A novel res-guided sampling block is proposed by combining…

CycleGAN results. Based on about three days of training for about 100 epochs, the CycleGAN model seems to do a very nice job of adapting GTA to the real-world domain. I really like how the smaller details are not lost in this translation and the image retains its sharpness even at such a low resolution.

In MMA-CycleGAN, the cycle-consistency loss and adversarial loss of CycleGAN are still used, but a mutual-attention (MA) mechanism is introduced, which allows attention-driven, long-range dependency modelling between the two image domains. Moreover, to efficiently handle the large image size, the MA is further extended to a multi-head form.

In this paper, we propose a novel underwater image enhancement method. Typical deep learning models for underwater image enhancement are trained on paired synthetic datasets. Therefore, these models are mostly effective for synthetic image enhancement but less so for real-world images. In contrast, cycle-consistent generative adversarial networks (CycleGAN) can be trained with unpaired datasets.

CycleGAN: Taking It Higher, Part 4. December 11, 2020. In the previous blog, we continued our deep dive into the world of generative adversarial networks (GANs) with the pix2pix GAN, which we also coded up ourselves. We achieved quite good results on the Maps-to-Google-Maps problem statement.

CycleGAN is a type of unsupervised style-transfer network. It is a modified implementation of GANs that can transfer the style of one picture to another. CycleGAN consists of two generators and two discriminators. The first generator of this network transfers the style from pictures labeled A to pictures labeled B.

(2021). CycleGAN-based realistic image dataset generation for forward-looking sonar. Advanced Robotics: Vol. 35, Special Issue on Intelligent Autonomous Systems, pp. 242-254.

Cone-beam computed tomography (CBCT) integrated with a linear accelerator is widely used to increase the accuracy of radiotherapy and plays an important role in image-guided radiotherapy (IGRT). Due to image-quality limitations, online megavoltage cone-beam CT (MV CBCT), which represents real online patient anatomy, cannot be used to perform adaptive radiotherapy (ART). In this study, we used a deep learning method, the cycle-consistent adversarial network (CycleGAN), to improve MV CBCT image quality and Hounsfield-unit (HU) accuracy for rectal cancer patients…

The trained CycleGAN was applied to all the test images, after appropriately scaling them to 256 × 256 pixels. Fig. 1. Schematic diagram of the architecture of the generator network (vanilla CycleGAN implementation).

By learning the mapping from down-sampled in-plane LR images to original HR US images, CycleGAN can generate through-plane HR images from originally sparsely distributed 2D images. Finally, HR 3D US images are reconstructed by combining the generated 2D images from the two CycleGAN models.

CycleGAN - TensorFlow 2. A TensorFlow 2 implementation of CycleGAN. Paper: Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks. Author: Jun-Yan Zhu et al. Exemplar results, summer2winter. Row 1: summer -> winter -> reconstructed summer; row 2: winter -> summer -> reconstructed winter.

CycleGAN is adopted as the basic architecture for the unpaired and unannotated dataset. Moreover, multiple-instance learning algorithms and the idea behind conditional GAN are considered to improve performance. To our knowledge, this is the first attempt to generate immunohistochemistry pathology microscopic images, and our method can achieve…

import os
import argparse
import json
import tinyms as ts
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image
from tinyms import context, Tensor
from tinyms.serving import start_server, predict, list_servables, shutdown, server_started
from tinyms.data import GeneratorDataset, UnalignedDataset, GanImageFolderDataset, DistributedSampler
from tinyms.vision import cyclegan…

Last time we experimented with pix2pix and applied it to a drawing board: draw an outline and pix2pix automatically fills in the colors. Isn't that fun? Next, let's play with a classic style-transfer work, CycleGAN, and convert photos into Van Gogh-style images. You may wonder why the title says "pseudo" Van Gogh style: as the images show, the model was trained for only one hundred epochs and has not yet…

Non-parallel voice conversion (VC) is a technique for training voice converters without a parallel corpus. Cycle-consistent adversarial network-based VCs (CycleGAN-VC and CycleGAN-VC2) are widely accepted as benchmark methods. However, owing to their insufficient ability to grasp time-frequency structures, their application is limited to mel…

When using CycleGAN augmentation, a dramatic increase in the Dice score for kidney segmentation is noted (from 0.09 to 0.66 for standard and CycleGAN augmentation, respectively; p < 0.001). M-08-04 - A Spatiotemporal Unpaired Deep Learning Method for Low-Dose Cardiac CT Image Denoising (#1019). J. Yang, S. Zhou, C. Li, L. Yu, J. Huang, M. Jin. Session: M-08 - Denoising and Segmentation Using Deep Learning Approaches. Date: Thursday, 21 October, 2021, 10:00 AM. Room: MIC - 2. M-05-189 - Deep Learning for MRI-based Attenuation Correction of Multitracer Brain PET Images (#866).

Machine learning can produce promising results when sufficient training data are available. Shanshui-DaDA is trained with CycleGAN (official PyTorch implementation) on 108 (later expanded to 205) Shanshui paintings collected from online open data. The raw painting scans are pre-processed into 1772 pairs of edge maps (sketches) and Shanshui paintings.

Welcome to TinyMS's documentation! TinyMS is an easy-to-use deep learning development toolkit based on MindSpore, designed to provide quick-start guidelines for machine learning beginners.

This is explained in the original CycleGAN paper: for the discriminator networks we use 70 × 70 PatchGANs, which aim to classify whether 70 × 70 overlapping image patches are real or fake. Such a patch-level discriminator architecture has fewer parameters than a full-image discriminator and can work on arbitrarily sized images in a fully convolutional fashion.
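The 70 × 70 figure quoted from the paper can be checked by walking the receptive field backwards through the discriminator's convolution stack. A small sketch, assuming the standard PatchGAN configuration of 4 × 4 kernels with strides 2, 2, 2, 1, 1:

```python
# Receptive-field check for the 70x70 PatchGAN discriminator.
# Walking backwards from a single output unit through each conv layer
# gives the input patch that one "real/fake" decision actually sees.
def receptive_field(layers):
    rf = 1
    for kernel, stride in reversed(layers):
        rf = (rf - 1) * stride + kernel
    return rf

# (kernel, stride) per layer, assumed standard CycleGAN discriminator
patchgan = [(4, 2), (4, 2), (4, 2), (4, 1), (4, 1)]
print(receptive_field(patchgan))  # -> 70
```

Because every output unit judges only a 70 × 70 patch, the same discriminator can be applied fully convolutionally to images of any size.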

By visual inspection, we observed that the proposed PSL method can deliver a noise-suppressed and detail-preserving image, while the TV-based method leads to blocky artifacts, the N2V method produces over-smoothed structures and a CT-value bias effect, and the CycleGAN method generates slightly noisy results with inaccurate CT values.

The style classifier scored CycleGAN highest, while the content classifier gave DualGAN the edge. GANILLA ranked highest when style and content scores were averaged. The researchers asked 48 people to (a) rate whether each GAN-made illustration looked like the illustrator's work, (b) describe what they thought the picture showed, and (c) rank…

Thanks to the online community for exploring many applications of our work and pointing out typos and errors in the paper and code. This work was supported in part by NSF SMA-1514512, NGA NURI, IARPA via Air Force Research Laboratory, Intel Corp, Berkeley Deep Drive, and hardware donations by Nvidia.

CycleGAN, an unsupervised learning model, showed similar results to cGAN, although unpaired data were used. CycleGAN reconstructed both the streak structures of the streamwise velocity and small strong structures of the wall-normal velocity, similar to the DNS field.

(CBCT) using CycleGAN for adaptive radiation therapy. To cite this article: Xiao Liang et al 2019 Phys. Med. Biol. 64 125002. Recent citations: Adaptive radiotherapy for head and neck cancer, Howard E. Morgan and David J. Sher; Visual enhancement of cone-beam CT by use of CycleGAN, Satoshi Kida et al.

Papers with Code - CycleGAN for Interpretable Online EMT Compensation

Human face expression recognition is an active research area with massive applications in the medical field, crime investigation, marketing, online learning, automobile safety, and video games. The first part of this research defines a deep-neural-network-based framework for recognizing the seven main types of facial expression, which are found in all cultures.

Objective. Papanicolaou and Giemsa stains used in cytology have different characteristics and complementary roles. In this study, we focused on the cycle-consistent generative adversarial network (CycleGAN), an image-translation technique using deep learning, and we performed mutual stain conversion between Giemsa and Papanicolaou in cytological images using CycleGAN.

The experiments demonstrated the efficacy of CycleGAN in bridging the cross-domain gap, which significantly improved performance in terms of image retrieval and feature-correspondence detection. With the 3D coordinates retrieved from BIM, the proposed method can achieve near-real-time camera pose estimation with an accuracy of 1.38 m and 10.1…

13 Dec 2019 » Learning to Imitate Human Demonstrations via CycleGAN.

CycleGAN - GitHub Page

  1. Theoretical basis: CycleGAN. CycleGAN is basically two mirrored GANs that form a ring network. The goal of CycleGAN is to convert image A to another domain, generating image A1, and then convert A1 back to A, where the output image A1 is similar to the original input image A, thereby forming a meaningful mapping that does not exist in the unpaired data set.
  2. …the discriminator need to progress in unison. VAE is used as a CycleGAN-compatible alternative to staged learning (24,30,31). VAE facilitates convergence of the core network and attention mechanism while simultaneously allowing the discriminator…
  3. The code for CycleGAN is similar; the main differences are an additional loss function and the use of unpaired training data. CycleGAN uses a cycle-consistency loss to enable training without the need for paired data. In other words, it can translate from one domain to another without a one-to-one mapping between the source and target domains.
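The cycle-consistency term described in the list above combines with the two adversarial terms into one generator objective. A toy sketch with placeholder loss values (all numbers illustrative; the original paper weights the cycle term with lambda = 10):

```python
# Sketch of the combined CycleGAN generator objective:
# two adversarial terms plus a lambda-weighted cycle-consistency term
# covering both translation directions.
def total_loss(adv_g, adv_f, cyc_forward, cyc_backward, lam=10.0):
    return adv_g + adv_f + lam * (cyc_forward + cyc_backward)

# Illustrative values only: adversarial losses ~1, small cycle errors.
print(total_loss(1.0, 1.0, 0.25, 0.25))  # -> 7.0
```

The large lambda reflects how strongly the cycle constraint is enforced relative to the adversarial terms; it is what keeps the translation content-preserving rather than arbitrary.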

The per-image runtime of the optimized CycleGAN denoising model was 0.05-0.07 seconds. The details of the optimized CycleGAN network are provided in the Supplementary Materials. To validate the performance of our proposed methods, the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) were used for image-quality evaluation [17].

Symbolic Music Genre Transfer with CycleGAN, by Gino Brunner et al., ETH Zurich (09/20/2018). Deep generative models such as variational autoencoders (VAEs) and generative adversarial networks (GANs) have recently been applied to style and domain transfer for images, and in the case of VAEs, music.
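The PSNR metric mentioned above is straightforward to compute from the mean squared error. A minimal sketch for flattened pixel lists in the [0, 255] range (the pixel values here are purely illustrative):

```python
import math

# PSNR sketch: peak signal-to-noise ratio between a reference image and
# a distorted one, for pixel intensities scaled to [0, 255].
def psnr(clean, noisy, peak=255.0):
    mse = sum((a - b) ** 2 for a, b in zip(clean, noisy)) / len(clean)
    return 10.0 * math.log10(peak ** 2 / mse)

clean = [100.0, 150.0, 200.0]   # toy "reference" pixels
noisy = [101.0, 149.0, 201.0]   # toy "denoised" pixels, off by 1 each
print(round(psnr(clean, noisy), 2))  # -> 48.13
```

Higher PSNR means the output is closer to the reference; values in the 30-50 dB range are typical for well-denoised natural images.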

Introduction to CycleGANs

Similarities. Let's first start with the similarities. Both the models described in the papers seek to find a mapping between a source domain and a target domain for a given image, while discovering this mapping without paired training data.

CycleGAN Monet-to-Photo Translation. Turn a Monet-style painting into a photo. Released in 2017, this model exploits a novel technique for image translation, in which two models translating from A to B and vice versa are trained jointly with adversarial training. In addition to the adversarial loss, cycle consistency is also enforced in the loss.

extracted 82 frames, and divided the data into train/val/test splits of 80/10/10. We then used an online photo-editing tool called Birme to standardize image dimensions. For the CycleGAN method, for our training set, we collected 53 frames from the 2D animated…

CycleGAN and DiscoGAN are pretty similar in terms of loss function, but their neural network structures are different: CycleGAN uses ResNet and DiscoGAN uses U-Net. In general, we found CycleGAN slightly outperforms DiscoGAN with the same number of epochs. DiscoGAN is more vulnerable to getting stuck in mode collapse.

Background: I'm currently using a variant of CycleGAN to preprocess synthetic imaging data before I use it for training another model. Shape-wise the real and synthetic datasets are very similar, but the synthetic data is a bit simpler than the real data and has some very vibrant colors not present in the real data.

The McAfee team used 1,500 photos of each of the project's two leads and fed the images into a CycleGAN to morph them into one another. At the same time, they used facial recognition…

For example, there are many freely available online tutorials on DCGAN and CycleGAN; much of the code in the book was also taken from open-sourced GitHub repos. Therefore, I went to buy a couple more books and found Image Generation with TensorFlow to be the best for my needs, as it covers not only the basics but also state-of-the-art models.

CycleGAN for image conversion - Develop Paper

Released the first version; supported models include Pixel2Pixel, CycleGAN, and PSGAN. Supported applications include video frame interpolation, super resolution, colorizing images and videos, and image animation. Modular design and friendly interface.

For image patches of 128 × 128 pixels, both the pix2pix and CycleGAN models have over forty-one million parameters for each generator network and over two million parameters for each discriminator in each view. The models use a batch size of one for the input images, batch normalization, and the Adam optimizer (β1 = 0.5, β2 = 0.999).

Writing StyleGAN from scratch with TensorFlow to edit faces (and CycleGAN, GauGAN, BigGAN and many more). (ProGAN) and StyleGAN generate high-definition portrait images. Most of the face-generation AI you see online comes from this family of models, which grow the network progressively from low resolutions of 4x4, 8x8, …, up to 1024x1024.

In short, the core idea behind generative networks is capturing the underlying distribution of the data. This distribution cannot be observed directly but has to be approximately inferred from the training data. Over the years, many techniques have emerged that aim to generate data similar to the input samples.

A cCycleGAN is an extension of CycleGAN which enables food-category transfer among 10 types of foods while retaining the shape of a given food. We experimentally show that 200 and 30,000 food images with the cCycleGAN enable a very natural food-category transfer among 10 types of typical Japanese foods: ramen noodles, curry rice, fried rice, beef…
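The Adam settings quoted above (β1 = 0.5, β2 = 0.999) can be illustrated with a single-parameter Adam update. A toy sketch, using lr = 2e-4 as is common for CycleGAN training (the gradient value is a made-up placeholder):

```python
import math

# Minimal single-parameter Adam step with the hyperparameters quoted
# above (beta1 = 0.5, beta2 = 0.999), purely for illustration.
def adam_step(theta, grad, m, v, t, lr=2e-4, b1=0.5, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * grad          # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad ** 2     # second-moment estimate
    m_hat = m / (1 - b1 ** t)             # bias correction for step t
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

theta, m, v = 1.0, 0.0, 0.0
theta, m, v = adam_step(theta, grad=0.5, m=m, v=v, t=1)
print(theta < 1.0)  # -> True (parameter moved against the gradient)
```

The unusually low β1 = 0.5 (versus the common default of 0.9) damps the momentum term, which is often reported to stabilize GAN training.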


Implementing CycleGAN for Image-to-Image Translation

CycleGAN allows for the development of translation models in instances when no paired training datasets exist. Popular image filters that can map the features of a human face onto a cat are an example of this type of machine-learning technique. Researchers used satellite images from Seattle, Washington, and Beijing, China, to explore how AI could use…

pytorch-CycleGAN-and-pix2pix - Image-to-image translation in PyTorch. This is our PyTorch implementation for both unpaired and paired image-to-image translation. It is still under active development. The code was written by Jun-Yan Zhu and Taesung Park, and supported by Tongzhou Wang.

The CycleGAN is constrained by the probability distribution of the data (the multi-fractal property of texture features in this Letter), while other methods are constrained by the blur properties contained in pairs of training images. The detailed structure of the CycleGAN used in this Letter is discussed in the next subsection.

Train your own CycleGAN style translator. I have a set of images (a few hundred) that represent a certain style, and I would like to train an unpaired image-to-image translator with CycleGAN. I'm looking for a tutorial on how one would do this with NetTrain. For example, in the Wolfram Neural Net Repository there is a NetModel for Photo-to-Van-Gogh translation.

Hands-on Implementation of CycleGAN, Image-to-Image Translation

In this paper, the diseased apple images are collected in two ways: orchard field collection and online collection. The healthy apple images are relatively easy to collect. In this paper, 500 healthy apple images and 140 anthracnose apple images are collected as datasets for training the CycleGAN model.

Like on this site: he also plans to upload photos of the world's most admired women into a database and use AI to create the perfect woman. Then he buys an Instagram account with a modest follower count for this lady and launches our brand-new model onto the market. This is where CycleGAN comes in.

Cycle Generative Adversarial Network (CycleGAN)

--model selects cycleGAN or pix2pix. --direction is the direction of the style conversion: whether to fill in an outline or convert a filled image back to an outline. 6. Execution results: the final results are written to the results folder of pytorch-CycleGAN-and-pix2pix; besides the individual image files, the author also compiles comparison figures on a web page, and the image below captures part of the results from that HTML page.

Cycle Generative Adversarial Network, or CycleGAN, is a technique for automatic training of image-to-image translation models without using paired examples. CycleGAN is made of two kinds of networks: discriminators and generators. While the discriminators classify images as real or fake, the generators create convincing fake images for both domains.

Unpaired cross-modality educed distillation (CMEDL) applied to CT lung tumor segmentation. 07/16/2021, by Jue Jiang et al., Memorial Sloan Kettering Cancer Center. Accurate and robust segmentation of lung cancers from CTs is needed to more accurately plan and deliver radiotherapy and to measure treatment response.

However, CycleGAN is able to learn such pair information without a one-to-one mapping between training data in the source and target domains. The challenge of this work is to propose a new architecture based on CycleGAN, which we call Blur2Sharp CycleGAN, for the task of text-document deblurring.

Cycle-consistent adversarial networks (CycleGAN) were used to generate CT-based sMRIs.

Similar to cosmetic surgery in the physical world, virtual face beautification is an emerging field with many open issues to be addressed.

CycleGAN architecture. The most famous GAN architecture built for this goal may be CycleGAN, introduced in 2017 and widely used since then. While CycleGAN is very successful at translating between similar domains (similar shapes and contexts), such as from horses to zebras or from apples to oranges, it falls short when trained on very diverse domains.

(arXiv:2107.06941) The CycleGAN framework allows for unsupervised image-to-image translation of unpaired data. In a scenario of surgical training on a physical surgical simulator, this method can be used to transform endoscopic images of phantoms into images which more closely resemble the intra-operative appearance of the same…

Definition and motivation. Style transfer in text is the task of rephrasing text from one style to another without changing other aspects of the meaning. In computer vision (CV), style transfer refers to changing the style of an image.

An architecture that uses a variational-autoencoder-enhanced, attention-aware, cycle-consistent generative adversarial network (A-CycleGAN) for MR-to-CT image translation is described; this is the first time, to our knowledge, that an A-CycleGAN has been used to solve MR-to-CT image translation.