Roop: a Free and Open-Source AI Face-Swapping Tool

2024-11-19 08:31:01 +0800 CST views 1121

One-click face swapping with no model training required: Roop, an impressive open-source project

AI face swapping has become a hot topic in computer science thanks to its distinctive appeal and wide range of applications. This article introduces Roop, a free and open-source AI face-swapping tool. It supports face swapping in images and videos, and can even swap faces in real time during live streams, offering creators, video producers, and everyday users a new kind of creative experience.

Roop Overview

As one of the earliest tools in the AI face-swapping space, Roop has built a large user base on the strength of its features. It supports image and video face swapping as well as real-time swapping during live streams, and it is easy to use: upload a face photo, select the image or video whose faces should be replaced, and the swap is done. A preview function lets you adjust the result as you go, and extra options such as keeping the original frame rate, skipping audio, and keeping temporary frames cover a range of needs.

Main Features

Image Face Swap

Replaces the face in one image with the face from another. Usage is simple: pick a source image and a target image.

Batch Image Face Swap

Processes multiple images in one go, replacing their faces with a chosen face. Useful when you have a large number of images to handle.
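Conceptually, batch mode is just the single-image swap looped over a directory. A minimal sketch of that idea, where `swap_image` is a hypothetical stand-in for whatever single-image swap callable you use (e.g. Roop's `process_img`):

```python
import glob
import os

def batch_swap(source_img, input_dir, output_dir, swap_image):
    """Apply a single-image face swap to every .jpg/.png in input_dir."""
    os.makedirs(output_dir, exist_ok=True)
    outputs = []
    for pattern in ('*.jpg', '*.png'):
        for target in sorted(glob.glob(os.path.join(input_dir, pattern))):
            out_path = os.path.join(output_dir, 'swapped-' + os.path.basename(target))
            swap_image(source_img, target, out_path)  # e.g. roop.swapper.process_img
            outputs.append(out_path)
    return outputs
```

The `swapped-` prefix mirrors the default output naming Roop itself uses for videos.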

Video Face Swap

Select a video file and a face image, and every face in the video is replaced with the face from the image, producing a new video. The panel on the right exposes all of the original program's options, including face-jitter suppression and multi-face handling.

Live-Stream Face Swap

One of Roop's standout features: faces can be replaced in real time during a live stream, making broadcasts more playful and interactive.

Environment Setup

  1. My deployment environment: Windows 10, CUDA 11.7, cuDNN 8.5, an Nvidia RTX 3060 GPU (6 GB VRAM), and Anaconda3.
  2. Download the source code. If git is not available, you can download a pre-packaged archive of the source and model instead.
git clone https://github.com/s0md3v/roop.git
cd roop
  3. Create the environment
conda create --name roop python=3.10
conda activate roop
conda install pytorch==2.0.0 torchvision==0.15.0 torchaudio==2.0.0 pytorch-cuda=11.7 -c pytorch -c nvidia
pip install -r requirements.txt
  4. Install the onnxruntime-gpu inference library
pip install onnxruntime-gpu
  5. Run the program
python run.py

On first run it downloads a model of just over 500 MB. The download can be slow on some networks; you can also download the model separately and place it in the Roop root directory.
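If you place the model manually, it is worth confirming it landed where Roop expects it, in the repository root next to run.py (the server script's pre_check performs the same test). A minimal sketch:

```python
import os

MODEL_NAME = 'inswapper_128.onnx'

def model_ready(roop_root):
    """Return True if the ~500 MB swap model sits in the Roop root directory."""
    return os.path.isfile(os.path.join(roop_root, MODEL_NAME))
```

If this returns False after a manual download, double-check the file name was not altered by the browser when saving.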

Troubleshooting

If you see the following error:

ffmpeg is not installed!

This means FFmpeg is missing. FFmpeg is an open-source suite for recording, converting, and streaming digital audio and video, and is widely used for encoding/decoding and stream capture. Download FFmpeg and add it to the PATH environment variable to resolve the error.
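You can verify the fix the same way Roop does internally, by asking whether the binary is resolvable on PATH. A minimal sketch:

```python
import shutil

def check_ffmpeg():
    """Mirror Roop's pre_check: ffmpeg must be resolvable on PATH."""
    return shutil.which('ffmpeg') is not None

if __name__ == '__main__':
    print('ffmpeg found' if check_ffmpeg() else 'ffmpeg is not installed!')
```

On Windows, remember to open a new terminal after editing PATH, or the change will not be picked up.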

  1. If local inference is too slow, you can run Roop as a server and access it from a web page or a WeChat mini program. Server-side code:
#!/usr/bin/env python3
 
import os
import sys
# single thread doubles performance of gpu-mode - needs to be set before torch import
if any(arg.startswith('--gpu-vendor') for arg in sys.argv):
    os.environ['OMP_NUM_THREADS'] = '1'
import platform
import signal
import shutil
import glob
import argparse
import psutil
import torch
import tensorflow
from pathlib import Path
import multiprocessing as mp
from opennsfw2 import predict_video_frames, predict_image
from flask import Flask, request
# import base64
import numpy as np
from gevent import pywsgi
import cv2
import time
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
 
import roop.globals
from roop.swapper import process_video, process_img, process_faces, process_frames
from roop.utils import is_img, detect_fps, set_fps, create_video, add_audio, extract_frames, rreplace
from roop.analyser import get_face_single
import roop.ui as ui
 
signal.signal(signal.SIGINT, lambda signal_number, frame: quit())
parser = argparse.ArgumentParser()
parser.add_argument('-f', '--face', help='use this face', dest='source_img')
parser.add_argument('-t', '--target', help='replace this face', dest='target_path')
parser.add_argument('-o', '--output', help='save output to this file', dest='output_file')
parser.add_argument('--keep-fps', help='maintain original fps', dest='keep_fps', action='store_true', default=False)
parser.add_argument('--keep-frames', help='keep frames directory', dest='keep_frames', action='store_true', default=False)
parser.add_argument('--all-faces', help='swap all faces in frame', dest='all_faces', action='store_true', default=False)
parser.add_argument('--max-memory', help='maximum amount of RAM in GB to be used', dest='max_memory', type=int)
parser.add_argument('--cpu-cores', help='number of CPU cores to use', dest='cpu_cores', type=int, default=max(psutil.cpu_count() // 2, 1))
parser.add_argument('--gpu-threads', help='number of threads to use for the GPU', dest='gpu_threads', type=int, default=8)
parser.add_argument('--gpu-vendor', help='choose your GPU vendor', dest='gpu_vendor', default='nvidia', choices=['apple', 'amd', 'intel', 'nvidia'])
 
args = parser.parse_known_args()[0]
 
if args.all_faces:
    roop.globals.all_faces = True
 
if args.cpu_cores:
    roop.globals.cpu_cores = int(args.cpu_cores)
 
# cpu thread fix for mac
if sys.platform == 'darwin':
    roop.globals.cpu_cores = 1
 
if args.gpu_threads:
    roop.globals.gpu_threads = int(args.gpu_threads)
 
# gpu thread fix for amd
if args.gpu_vendor == 'amd':
    roop.globals.gpu_threads = 1
 
if args.gpu_vendor:
    roop.globals.gpu_vendor = args.gpu_vendor
else:
    roop.globals.providers = ['CPUExecutionProvider']
 
sep = "/"
if os.name == "nt":
    sep = "\\"
 
 
def limit_resources():
    # prevent tensorflow memory leak
    gpus = tensorflow.config.experimental.list_physical_devices('GPU')
    for gpu in gpus:
        tensorflow.config.experimental.set_memory_growth(gpu, True)
    if args.max_memory:
        memory = args.max_memory * 1024 * 1024 * 1024
        if str(platform.system()).lower() == 'windows':
            import ctypes
            kernel32 = ctypes.windll.kernel32
            kernel32.SetProcessWorkingSetSize(-1, ctypes.c_size_t(memory), ctypes.c_size_t(memory))
        else:
            import resource
            resource.setrlimit(resource.RLIMIT_DATA, (memory, memory))
 
 
def pre_check():
    if sys.version_info < (3, 9):
        quit('Python version is not supported - please upgrade to 3.9 or higher')
    if not shutil.which('ffmpeg'):
        quit('ffmpeg is not installed!')
    model_path = os.path.join(os.path.abspath(os.path.dirname(__file__)), '../inswapper_128.onnx')
    if not os.path.isfile(model_path):
        quit('File "inswapper_128.onnx" does not exist!')
    if roop.globals.gpu_vendor == 'apple':
        if 'CoreMLExecutionProvider' not in roop.globals.providers:
            quit("You are using --gpu=apple flag but CoreML isn't available or properly installed on your system.")
    if roop.globals.gpu_vendor == 'amd':
        if 'ROCMExecutionProvider' not in roop.globals.providers:
            quit("You are using --gpu=amd flag but ROCM isn't available or properly installed on your system.")
    if roop.globals.gpu_vendor == 'nvidia':
        CUDA_VERSION = torch.version.cuda
        CUDNN_VERSION = torch.backends.cudnn.version()
        if not torch.cuda.is_available():
            quit("You are using --gpu=nvidia flag but CUDA isn't available or properly installed on your system.")
        if CUDA_VERSION > '11.8':
            quit(f"CUDA version {CUDA_VERSION} is not supported - please downgrade to 11.8")
        if CUDA_VERSION < '11.4':
            quit(f"CUDA version {CUDA_VERSION} is not supported - please upgrade to 11.8")
        if CUDNN_VERSION < 8220:
            quit(f"CUDNN version {CUDNN_VERSION} is not supported - please upgrade to 8.9.1")
        if CUDNN_VERSION > 8910:
            quit(f"CUDNN version {CUDNN_VERSION} is not supported - please downgrade to 8.9.1")
 
 
def get_video_frame(video_path, frame_number = 1):
    cap = cv2.VideoCapture(video_path)
    if not cap.isOpened():
        print("Error opening video file")
        return
    amount_of_frames = cap.get(cv2.CAP_PROP_FRAME_COUNT)
    cap.set(cv2.CAP_PROP_POS_FRAMES, min(amount_of_frames, frame_number - 1))
    ret, frame = cap.read()
    cap.release()
    if ret:
        return cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
 
 
def preview_video(video_path):
    cap = cv2.VideoCapture(video_path)
    if not cap.isOpened():
        print("Error opening video file")
        return 0
    amount_of_frames = cap.get(cv2.CAP_PROP_FRAME_COUNT)
    cap.release()
    frame = get_video_frame(video_path)
    return (amount_of_frames, frame)
 
 
def status(string):
    value = "Status: " + string
    if 'cli_mode' in args:
        print(value)
    else:
        ui.update_status_label(value)
 
 
def process_video_multi_cores(source_img, frame_paths):
    n = len(frame_paths) // roop.globals.cpu_cores
    if n > 2:
        processes = []
        for i in range(0, len(frame_paths), n):
            p = POOL.apply_async(process_video, args=(source_img, frame_paths[i:i + n],))
            processes.append(p)
        for p in processes:
            p.get()
        POOL.close()
        POOL.join()
 
 
 
def select_face_handler(path: str):
    args.source_img = path
 
 
def select_target_handler(path: str):
    args.target_path = path
    return preview_video(args.target_path)
 
 
def toggle_all_faces_handler(value: int):
    roop.globals.all_faces = True if value == 1 else False
 
 
def toggle_fps_limit_handler(value: int):
    args.keep_fps = int(value != 1)
 
 
def toggle_keep_frames_handler(value: int):
    args.keep_frames = value
 
 
def save_file_handler(path: str):
    args.output_file = path
 
 
def create_test_preview(frame_number):
    return process_faces(
        get_face_single(cv2.imread(args.source_img)),
        get_video_frame(args.target_path, frame_number)
    )
 
 
app = Flask(__name__)
@app.route('/face_swap', methods=['POST'])
def face_swap():
    if request.method == 'POST':
        args.source_img = request.form.get('source_img')
        args.target_path = request.form.get('target_path')
        args.output_file = request.form.get('output_path')
        keep_fps = request.form.get('keep_fps')
        if keep_fps == '0':
            args.keep_fps = False
        else:
            args.keep_fps = True

        # form key 'Keep_frames' matches the client payload; store the value on
        # the canonical args.keep_frames attribute that add_audio() reads later
        keep_frames = request.form.get('Keep_frames')
        if keep_frames == '0':
            args.keep_frames = False
        else:
            args.keep_frames = True
 
        all_faces = request.form.get('all_faces')
        if all_faces == '0':
            args.all_faces = False
        else:
            args.all_faces = True
 
    # return message strings on failure - a Flask view must not return None
    if not args.source_img or not os.path.isfile(args.source_img):
        print("\n[WARNING] Please select an image containing a face.")
        return 'error: source image not found'
    elif not args.target_path or not os.path.isfile(args.target_path):
        print("\n[WARNING] Please select a video/image to swap face in.")
        return 'error: target file not found'
    if not args.output_file:
        target_path = args.target_path
        args.output_file = rreplace(target_path, "/", "/swapped-", 1) if "/" in target_path else "swapped-" + target_path
    target_path = args.target_path
    test_face = get_face_single(cv2.imread(args.source_img))
    if not test_face:
        print("\n[WARNING] No face detected in source image. Please try with another one.\n")
        return 'error: no face detected in source image'
    if is_img(target_path):
        if predict_image(target_path) > 0.85:
            quit()
        process_img(args.source_img, target_path, args.output_file)
        # status("swap successful!")
        return 'ok'
    
    seconds, probabilities = predict_video_frames(video_path=args.target_path, frame_interval=100)
    if any(probability > 0.85 for probability in probabilities):
        quit()
    video_name_full = target_path.split("/")[-1]
    video_name = os.path.splitext(video_name_full)[0]
    output_dir = os.path.dirname(target_path) + "/" + video_name if os.path.dirname(target_path) else video_name
    Path(output_dir).mkdir(exist_ok=True)
    # status("detecting video's FPS...")
    fps, exact_fps = detect_fps(target_path)
    
    if not args.keep_fps and fps > 30:
        this_path = output_dir + "/" + video_name + ".mp4"
        set_fps(target_path, this_path, 30)
        target_path, exact_fps = this_path, 30
    else:
        shutil.copy(target_path, output_dir)
    # status("extracting frames...")
    extract_frames(target_path, output_dir)
 
    args.frame_paths = tuple(sorted(
        glob.glob(output_dir + "/*.png"),
        key=lambda x: int(x.split(sep)[-1].replace(".png", ""))
    ))
 
    # status("swapping in progress...")
    if roop.globals.gpu_vendor is None and roop.globals.cpu_cores > 1:
        global POOL
        POOL = mp.Pool(roop.globals.cpu_cores)
        process_video_multi_cores(args.source_img, args.frame_paths)
    else:
        process_video(args.source_img, args.frame_paths)
    # status("creating video...")
    create_video(video_name, exact_fps, output_dir)
    # status("adding audio...")
    add_audio(output_dir, target_path, video_name_full, args.keep_frames, args.output_file)
    save_path = args.output_file if args.output_file else output_dir + "/" + video_name + ".mp4"
    print("\n\nVideo saved as:", save_path, "\n\n")
    # status("swap successful!")
 
    return 'ok'
 
if __name__ == "__main__":
    print('Start server ----------------')
    server = pywsgi.WSGIServer(('127.0.0.1', 5020), app)
    server.serve_forever()
  2. Client example:
import requests

source_img = "z1.jpg"
target_path = "z2.mp4"
output_path = "zface2.mp4"
keep_fps = '0'
Keep_frames = '0'
all_faces = '0'

# keys must match the form fields the server reads ('keep_fps', 'Keep_frames', 'all_faces')
data = {'source_img': source_img, 'target_path': target_path, 'output_path': output_path,
        'keep_fps': keep_fps, 'Keep_frames': Keep_frames, 'all_faces': all_faces}

resp = requests.post("http://127.0.0.1:5020/face_swap", data=data)
print(resp.content)

Summary

Roop is a capable AI face-swapping tool that combines powerful swapping with high-quality face generation. With the walkthrough above, you should have a solid grasp of how to install and use it; keep experimenting and see what you can create.

Source code: https://github.com/s0md3v/roop
