【Reproduction】vid2vid_zero
A summary of the problems I ran into and how I solved them.
code: GitHub - baaivision/vid2vid-zero: Zero-Shot Video Editing Using Off-The-Shelf Image Diffusion Models
1. AttributeError: 'UNet2DConditionModel' object has no attribute 'encoder'
Reportedly caused by a pretrained model whose architecture doesn't match. I lazily reused the sd-v1-5 checkpoint from animatediff, and sure enough it didn't work. So, off to properly download sd-v1-4.
URL: https://huggingface.co/CompVis/stable-diffusion-v1-4/tree/main
A long download x N
2. HFValidationError
File "/opt/conda/envs/vid2vid/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 158, in validate_repo_id
raise HFValidationError(
huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/data/vid2vid-zero/checkpoints/stable-diffusion-v1-4'. Use `repo_type` argument if needed.
At first I assumed the file path was written wrong, but after checking it several times I could rule that out. So I looked up the file named in the traceback:
File "/opt/conda/envs/vid2vid/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 158, in validate_repo_id
In validators.py, at line 158:
if repo_id.count("/") > 1:
    raise HFValidationError(
        "Repo id must be in the form 'repo_name' or 'namespace/repo_name':"
        f" '{repo_id}'. Use `repo_type` argument if needed."
    )
In other words, if the 'Path to off-the-shelf model' input contains more than one "/", this error is raised.
Moreover, every time I click Start, the console prints:
https://huggingface.co/xxx
where xxx is whatever was entered as 'Path to off-the-shelf model' — so here it becomes
https://huggingface.co//data/vid2vid-zero/checkpoints/stable-diffusion-v1-4
which is clearly a broken URL. The code expects the input in the format of a model id on the Hugging Face Hub; for example, the sample input CompVis/stable-diffusion-v1-4 links to the online model:
https://huggingface.co/CompVis/stable-diffusion-v1-4
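The rejection can be reproduced without the full stack: the check quoted above only counts slashes, so an absolute local path can never pass as a repo id. A minimal re-implementation of that logic (my own sketch, not the huggingface_hub function itself):

```python
# Re-implementation of the slash check quoted from validators.py above,
# to illustrate why a local path is always rejected as a repo id.
def looks_like_repo_id(repo_id: str) -> bool:
    return repo_id.count("/") <= 1

print(looks_like_repo_id("CompVis/stable-diffusion-v1-4"))                         # True
print(looks_like_repo_id("/data/vid2vid-zero/checkpoints/stable-diffusion-v1-4"))  # False
```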
The code has to be changed so that the program looks for a local model first instead of going straight to the Hugging Face Hub to use the model online (which fails for me with ConnectTimeoutError).
After some blind hacking, here is the download_base_model method in runner.py:
def download_base_model(self, base_model_id: str, token=None) -> str:
    # Build the local path for the model files
    model_dir = self.checkpoint_dir / base_model_id
    org_name = base_model_id.split('/')[0]
    org_dir = self.checkpoint_dir / org_name
    # If the model files do not exist yet, create an empty org directory
    if not model_dir.exists():
        org_dir.mkdir(exist_ok=True)
    # Print the model's link on the Hugging Face Hub
    print(f'https://huggingface.co/{base_model_id}')
    print(token)
    print(org_dir)
    # Without a token, clone the model via Git Large File Storage (LFS)
    if token is None:
        subprocess.run(shlex.split('git lfs install'), cwd=org_dir)
        subprocess.run(shlex.split(
            f'git lfs clone https://huggingface.co/{base_model_id}'),
            cwd=org_dir)
        return model_dir.as_posix()
    # Otherwise, download a model snapshot from the Hub to a temp path and return that path
    else:
        temp_path = huggingface_hub.snapshot_download(base_model_id, use_auth_token=token)
        print(temp_path, org_dir)
        # Move the model files from the temp path to the target path
        # subprocess.run(shlex.split(f'mv {temp_path} {model_dir.as_posix()}'))
        # return model_dir.as_posix()
        return temp_path
Changed to:
class Runner:
    def __init__(self, hf_token: str | None = None):
        self.hf_token = hf_token
        self.checkpoint_dir = pathlib.Path('checkpoints')
        self.checkpoint_dir.mkdir(exist_ok=True)

    def download_base_model(self, base_model_id: str, token=None) -> str:
        model_dir = self.checkpoint_dir / base_model_id
        org_name = base_model_id.split('/')[0]
        org_dir = self.checkpoint_dir / org_name
        if not model_dir.exists():
            org_dir.mkdir(exist_ok=True)
        # Load the model from local files instead of the Hub
        local_model_path = '/data/vid2vid-zero/checkpoints/stable-diffusion-v1-4'
        return local_model_path
app.py also needs a small change.
That problem is solved, for now.
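Instead of hard-coding the path, a slightly more general fix would try a local directory first and fall back to the Hub repo id. A sketch, assuming diffusers' from_pretrained accepts a plain local directory (the checkpoints layout is my guess, not upstream code):

```python
import pathlib

def resolve_base_model(base_model_id: str, checkpoint_dir: str = "checkpoints") -> str:
    """Prefer a local copy of the model; otherwise return the Hub repo id.

    Passing a local directory to from_pretrained sidesteps the repo-id
    validation entirely, since no Hub lookup happens.
    """
    local_dir = pathlib.Path(checkpoint_dir) / base_model_id
    if local_dir.is_dir():
        return local_dir.as_posix()
    return base_model_id

print(resolve_base_model("CompVis/stable-diffusion-v1-4", "no_such_dir"))  # falls back to the repo id
```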
3. FileNotFoundError: [Errno 2] No such file or directory: '...'
As expected, a new problem appeared:
video path for gradio: /data/vid2vid-zero/gradio_demo/outputs/A_car_is_moving_on_the_road./test.mp4
Running completed!
Traceback (most recent call last):
File "/opt/conda/envs/vid2vid/lib/python3.10/site-packages/gradio/queueing.py", line 407, in call_prediction
output = await route_utils.call_process_api(
File "/opt/conda/envs/vid2vid/lib/python3.10/site-packages/gradio/route_utils.py", line 226, in call_process_api
output = await app.get_blocks().process_api(
File "/opt/conda/envs/vid2vid/lib/python3.10/site-packages/gradio/blocks.py", line 1559, in process_api
data = self.postprocess_data(fn_index, result["prediction"], state)
File "/opt/conda/envs/vid2vid/lib/python3.10/site-packages/gradio/blocks.py", line 1447, in postprocess_data
prediction_value = block.postprocess(prediction_value)
File "/opt/conda/envs/vid2vid/lib/python3.10/site-packages/gradio/components/video.py", line 273, in postprocess
processed_files = (self._format_video(y), None)
File "/opt/conda/envs/vid2vid/lib/python3.10/site-packages/gradio/components/video.py", line 350, in _format_video
video = self.make_temp_copy_if_needed(video)
File "/opt/conda/envs/vid2vid/lib/python3.10/site-packages/gradio/components/base.py", line 233, in make_temp_copy_if_needed
temp_dir = self.hash_file(file_path)
File "/opt/conda/envs/vid2vid/lib/python3.10/site-packages/gradio/components/base.py", line 197, in hash_file
with open(file_path, "rb") as f:
FileNotFoundError: [Errno 2] No such file or directory: '/data/vid2vid-zero/gradio_demo/outputs/A_car_is_moving_on_the_road./test.mp4'
The output path really doesn't contain a test.mp4 file. Why, when it even printed Running completed!?
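One thing worth checking (a guess, not a confirmed fix): the output folder name A_car_is_moving_on_the_road. ends in a period because the prompt does, and trailing dots in directory names can trip up path handling. If the demo derives the folder name from the prompt, stripping trailing punctuation is a cheap experiment — this sanitizer is hypothetical, not the project's actual code:

```python
import re

def prompt_to_dirname(prompt: str) -> str:
    # Replace spaces with underscores and drop trailing punctuation, so
    # "A car is moving on the road." no longer yields a dir ending in "."
    name = prompt.strip().replace(" ", "_")
    return re.sub(r"[^\w-]+$", "", name)

print(prompt_to_dirname("A car is moving on the road."))  # A_car_is_moving_on_the_road
```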
4. Missing xformers
conda install xformers -c xformers
Failed: it pulls the latest xformers==0.0.23 by default, which requires PyTorch 2.1.1, while my setup is:
Environment: pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.6
It spat out a pile of incompatibility errors.
Attempt 2: followed the CSDN tutorial "Linux安装xFormers教程" instead.
Solved!
One remaining concern: the minimum GPU compute capability xformers supports is (7, 0), while my GPU is (6, 1). Not sure what problems that will cause, but nothing has errored so far, so I'll leave it for now.
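The capability concern can at least be made explicit at startup: torch.cuda.get_device_capability() returns a (major, minor) tuple, and comparing it against the minimum is a plain tuple comparison. A small helper (the torch call is left as a comment so the snippet runs anywhere, GPU or not):

```python
def meets_min_capability(capability, minimum=(7, 0)):
    # Python compares tuples element-wise, so (6, 1) >= (7, 0) is False
    return tuple(capability) >= minimum

# On a CUDA machine: capability = torch.cuda.get_device_capability(0)
print(meets_min_capability((6, 1)))  # False -> xformers may misbehave
print(meets_min_capability((8, 6)))  # True
```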
5. OSError: Unable to load weights from checkpoint file
OSError: Unable to load weights from checkpoint file for '/data/vid2vid-zero/checkpoints/stable-diffusion-v1-4/unet/diffusion_pytorch_model.bin' at '/data/vid2vid-zero/checkpoints/stable-diffusion-v1-4/unet/diffusion_pytorch_model.bin'. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
Cause: the file is missing or corrupted.
A closer look showed the connection to the server dropped halfway through the copy, so only half of the .bin file was uploaded. Deleted it and re-uploaded.
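Half-transferred .bin files can be caught before diffusers trips over them by comparing checksums (or at least file sizes) between the source and destination copies. A small sketch:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash a file in 1 MiB chunks; run this on both copies after a transfer
    and compare the digests to detect a truncated upload."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

Running sha256sum on both machines from the shell does the same job.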
6. AttributeError: 'NoneType' object has no attribute 'eval'
Traceback (most recent call last):
File "/data/vid2vid-zero/test_vid2vid_zero.py", line 269, in <module>
main(**OmegaConf.load(args.config))
File "/data/vid2vid-zero/test_vid2vid_zero.py", line 200, in main
unet.eval()
AttributeError: 'NoneType' object has no attribute 'eval'
Traceback (most recent call last):
File "/opt/conda/envs/vid2vid/bin/accelerate", line 8, in <module>
sys.exit(main())
File "/opt/conda/envs/vid2vid/lib/python3.10/site-packages/accelerate/commands/accelerate_cli.py", line 47, in main
args.func(args)
File "/opt/conda/envs/vid2vid/lib/python3.10/site-packages/accelerate/commands/launch.py", line 994, in launch_command
simple_launcher(args)
File "/opt/conda/envs/vid2vid/lib/python3.10/site-packages/accelerate/commands/launch.py", line 636, in simple_launcher
raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/opt/conda/envs/vid2vid/bin/python3.1', 'test_vid2vid_zero.py', '--config', 'configs/car-moving.yaml']' returned non-zero exit status 1.
Some debugging showed that unet is None.
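Until the root cause is clear, a guard right before unet.eval() gives a readable failure instead of the opaque NoneType traceback — a defensive helper of my own, not code from the repo:

```python
def require_loaded(model, name: str):
    # Fail fast with a pointer at the likely culprit (bad pretrained_model_path
    # or a missing subfolder) instead of 'NoneType' has no attribute 'eval'.
    if model is None:
        raise RuntimeError(
            f"{name} was not loaded; check pretrained_model_path in the config "
            f"and that the '{name}' subfolder exists in the checkpoint"
        )
    return model

# in test_vid2vid_zero.py, before calling unet.eval():
# unet = require_loaded(unet, "unet")
```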
I suddenly realized that both animatediff and vid2vid are built on top of Tune-A-Video; I should go read its code at some point.