1.a) Enhancing Images

The pre-trained models provided in the GitHub releases can be invoked directly from the command line. The default is to use --device=cpu; if you have an NVIDIA card set up with CUDA, try --device=gpu0. On the CPU, you can also set the environment variable OMP_NUM_THREADS=4, which is most useful when running the script multiple times in parallel. Runtime depends on the neural network size.

Example #2 - Bank Lobby: view comparison in 24-bit HD, original photo CC-BY-SA 2.

# Remove the model file as we don't want to reload the data to fine-tune it.

# Pre-train the model using perceptual loss from the paper below.
python3.4 enhance.py --train "data/*.jpg" --model custom --scales=2 --epochs=50 \
    --perceptual-layer=conv2_2 --smoothness-weight=1e7 --adversary-weight=0.0 \
    --generator-blocks=4 --generator-filters=64

# Train the model using an adversarial setup based on the paper below.
python3.4 enhance.py --train "data/*.jpg" --model custom --scales=2 --epochs=250 \
    --perceptual-layer=conv5_2 --smoothness-weight=2e4 --adversary-weight=1e3 \
    --generator-start=5 --discriminator-start=0 --adversarial-start=5

# The newly trained model is output into this file.

Example #3 - Specialized super-resolution for faces, trained on HD examples of celebrity faces only. The quality is significantly higher when narrowing the domain from "photos" in general.

Installation & Setup

2.a) Using Docker Image

The easiest way to get up and running is to install Docker. Then you should be able to download and run the pre-built image using the docker command-line tool. Find out more about the alexjc/neural-enhance image on its Docker Hub page. This is the simplest way to call the script: assuming you're familiar with using the -v argument to mount folders, you can use it directly to specify files to enhance.

To set everything up manually instead:

# Create a local environment for Python 3.x to install dependencies here.
python3 -m venv pyvenv --system-site-packages

# If you're using bash, make this the active version of Python.
source pyvenv/bin/activate

# Setup the required dependencies simply using the PIP module.
python3 -m pip install --ignore-installed -r requirements.txt

After this, you should have pillow, theano and lasagne installed in your virtual environment. You'll also need to download this pre-trained neural network (VGG19, 80Mb) and put it in the same folder as the script in order to run it. To de-install everything, you can just delete the #/pyvenv/ folder.
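The post-install check ("you should have pillow, theano and lasagne installed") can be automated. Below is a small sketch of one way to do it; the `missing_packages` helper and the `required` list are my own illustration, not part of enhance.py. Note that Pillow installs under the module name `PIL`.

```python
import importlib.util

def missing_packages(names):
    # Return the subset of `names` that cannot be imported in this environment.
    return [n for n in names if importlib.util.find_spec(n) is None]

# Modules the script expects after `pip install -r requirements.txt`
# (Pillow is imported as `PIL`).
required = ["PIL", "theano", "lasagne"]
```

Running `missing_packages(required)` inside the activated virtual environment should return an empty list; any names it returns still need installing.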
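To make the training flags more concrete: --smoothness-weight and --adversary-weight scale two extra terms that are added to the perceptual loss. Here is a minimal NumPy sketch of that weighted sum, under my own assumptions — the real script computes these terms on VGG19 feature maps inside Theano, and every function name below is hypothetical.

```python
import numpy as np

def perceptual_loss(feat_out, feat_target):
    # Mean squared error between feature activations
    # (e.g. the conv2_2 or conv5_2 layer selected by --perceptual-layer).
    return float(np.mean((feat_out - feat_target) ** 2))

def smoothness_loss(img):
    # Total-variation style penalty: discourages high-frequency noise.
    dx = np.diff(img, axis=1)
    dy = np.diff(img, axis=0)
    return float(np.mean(dx ** 2) + np.mean(dy ** 2))

def total_loss(feat_out, feat_target, img, adv_score,
               smoothness_weight=1e7, adversary_weight=0.0):
    # Weighted sum mirroring the command-line flags; setting
    # adversary_weight=0.0 reproduces the pure pre-training phase.
    return (perceptual_loss(feat_out, feat_target)
            + smoothness_weight * smoothness_loss(img)
            + adversary_weight * adv_score)
```

This also shows why the two phases differ: pre-training uses adversary_weight=0.0 (perceptual loss only), while the adversarial phase raises it to 1e3 and relaxes the smoothness weight.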
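The OMP_NUM_THREADS advice for parallel CPU runs can also be scripted. This is a hypothetical helper of my own (not part of enhance.py) that builds a per-process environment to hand to subprocess:

```python
import os
import subprocess  # used when actually launching the script

def worker_env(threads=4):
    # Copy the current environment and cap OpenMP at `threads` threads,
    # so several CPU processes running in parallel don't oversubscribe cores.
    env = dict(os.environ)
    env["OMP_NUM_THREADS"] = str(threads)
    return env
```

You would pass it when launching each run, e.g. `subprocess.Popen(["python3", "enhance.py", "photo.jpg"], env=worker_env(4))` (the file name here is only a placeholder).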