OpenDevin: An Open-Source AI Software Engineer for Everyone

skywalk8163 · 2024-06-14 13:31:02

OpenDevin is an open-source project that aims to replicate Devin, an autonomous AI software engineer capable of executing complex engineering tasks and collaborating actively with users on software development projects. The project hopes to replicate, enhance, and innovate on Devin through the power of the open-source community.

Official site: GitHub - skywalk163/OpenDevin: 🐚 OpenDevin: Code Less, Make More

Mirror: skywalk163/OpenDevin (open-source Devin) on OpenI, the Qizhi AI open-source community, which provides free compute.

 

Installation

Python 3.11 or later is required; this is the strictest Python version requirement I have seen in any software so far. To install Python 3.11 on Ubuntu:

sudo apt install python3.11
sudo apt install python3.11-venv
python3.11 -m venv py311
source py311/bin/activate

Don't forget to activate the py311 environment.
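A quick way to confirm the active interpreter really meets the requirement is to compare version tuples. The helper below is a hypothetical sketch, not part of OpenDevin:

```python
import sys

def check_python(min_version=(3, 11), current=None):
    """Return True if `current` (defaults to the running
    interpreter) meets OpenDevin's minimum Python version."""
    if current is None:
        current = sys.version_info[:2]
    return tuple(current) >= tuple(min_version)

# Example: check the interpreter the venv activated
print(check_python())
```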

Installing on FreeBSD

git clone https://openi.pcl.ac.cn/skywalk163/OpenDevin/
cd OpenDevin
# You can try installing the poetry package first:
# pkg install devel/py-poetry
# Install the required libraries
pip install poetry
make build

Unfortunately it still needs Docker, so the FreeBSD attempt stalled there. If anyone has experience running Docker on FreeBSD, please share it.

Installing on Ubuntu:

git clone https://openi.pcl.ac.cn/skywalk163/OpenDevin/
cd OpenDevin
# Install the required libraries
pip install poetry
# Install the libraries needed at build time
pip install pydantic-core==2.16.3
pip install numpy==1.26.4 pillow==10.3.0 pandas==2.2.1 nltk==3.8.1
pip install sqlalchemy==2.0.29
pip install llama-index-core==0.10.27 mdurl==0.1.2 nvidia-nvjitlink-cu12==12.4.127 opentelemetry-api==1.24.0
pip install protobuf==4.25.3 pyasn1==0.6.0 pyjwt==2.8.0 setuptools==69.2.0
pip install torch==2.2.2 kubernetes==29.0.0
make build

If make build fails with a Docker permission error, run:

sudo chmod 666 /var/run/docker.sock

make build downloads many more Python libraries afterwards, so it can fail if disk space is tight. The required libraries are listed below; the installation steps above already cover some of them. Installing as many as possible with pip beforehand is recommended, since it is much faster than letting make build install them:

[tool.poetry]
name = "opendevin"
version = "0.1.0"
description = "OpenDevin: Code Less, Make More"
authors = ["OpenDevin"]
license = "MIT"
readme = "README.md"
repository = "https://github.com/OpenDevin/OpenDevin"

[tool.poetry.dependencies]
python = "^3.11"
datasets = "*"
pandas = "*"
litellm = "*"
google-generativeai = "*" # To use litellm with Gemini Pro API
termcolor = "*"
seaborn = "*"
docker = "*"
fastapi = "*"
toml = "*"
uvicorn = "*"
types-toml = "*"
numpy = "*"
json-repair = "*"
playwright = "*"
pexpect = "*"

[tool.poetry.group.llama-index.dependencies]
llama-index = "*"
llama-index-vector-stores-chroma = "*"
chromadb = "*"
llama-index-embeddings-huggingface = "*"
llama-index-embeddings-azure-openai = "*"
llama-index-embeddings-ollama = "*"

[tool.poetry.group.dev.dependencies]
ruff = "*"
mypy = "*"
pre-commit = "*"

[tool.poetry.group.test.dependencies]
pytest = "*"

[tool.poetry.group.evaluation.dependencies]
torch = "*"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"

Using Docker registry mirrors (the list below goes in /etc/docker/daemon.json)

The mirrors did not seem to help much; the speedup also seems to depend on the time of day.

{
  "registry-mirrors": [
    "https://docker.mirrors.sjtug.sjtu.edu.cn",
    "https://mirror.baidubce.com",
    "https://docker.mirrors.ustc.edu.cn",
    "https://dockerproxy.com",
    "https://docker.m.daocloud.io",
    "https://registry.docker-cn.com",
    "https://hub-mirror.c.163.com",
    "https://docker.nju.edu.cn"
  ]
}
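The original mirror list contained duplicate entries, which Docker tolerates but which are easy to clean up. A small sketch of deduplicating while preserving order before writing daemon.json (the URLs are a subset of the list above):

```python
import json

# A subset of the mirror URLs above, with duplicates included
mirrors = [
    "https://docker.mirrors.sjtug.sjtu.edu.cn",
    "https://mirror.baidubce.com",
    "https://docker.mirrors.ustc.edu.cn",
    "https://mirror.baidubce.com",
    "https://docker.mirrors.sjtug.sjtu.edu.cn",
]
# dict.fromkeys keeps the first occurrence of each key, in order
unique = list(dict.fromkeys(mirrors))
daemon_json = json.dumps({"registry-mirrors": unique}, indent=2)
print(daemon_json)
```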

Updating the Node.js version

If the installer complains that your Node.js version is too low, upgrade with:

sudo npm install n -g
sudo n 18.17.1

With installation done, it is time to configure.

Configuring OpenDevin

Using the Makefile: the easy way

With a single command, you can get a smooth LLM setup for your OpenDevin experience. Just run:

    make setup-config

    This command will prompt you for your LLM API key and model name, ensuring OpenDevin fits your specific needs.

An example session:

make setup-config

Setting up config.toml...

make[1]: Entering directory '/home/skywalk/github/OpenDevin'

Enter your LLM Model name (see https://docs.litellm.ai/docs/providers for full list) [default: gpt-3.5-turbo-1106]:

Enter your LLM API key: sss

Enter your LLM Base URL [mostly used for local LLMs, leave blank if not needed - example: http://localhost:5001/v1/]: http://127.0.0.1:1337/v1/

Enter your LLM Embedding Model
Choices are openai, azureopenai, llama2 or leave blank to default to 'BAAI/bge-small-en-v1.5' via huggingface

> openai

Enter your workspace directory [default: ./workspace]: /media/skywalk/EXTERNAL_USB/work

make[1]: Leaving directory '/home/skywalk/github/OpenDevin'

Config.toml setup completed.

(py311) skywalk@ub:~/github/OpenDevin$

For the LLM I ended up choosing a self-hosted ChatGPT-compatible API at http://localhost:1337/v1/. For setting up such a service, see: gpt4free带来了更好的chatgpt体验!-CSDN博客

For the embedding model I chose BAAI/bge-small-en-v1.5. I picked it because I do not have an OpenAI account and therefore cannot use OpenAI's embedding models, while BAAI/bge-small-en-v1.5 is OpenDevin's default, and the bge models are genuinely excellent.

Since I am not using OpenAI, the OpenAI key is set to an arbitrary value (csdn), and the base URL is set to http://localhost:1337/v1/models/.

The final configuration session:

make setup-config
Setting up config.toml...
make[1]: Entering directory '/home/skywalk/github/OpenDevin'
Enter your LLM Model name (see https://docs.litellm.ai/docs/providers for full list) [default: gpt-3.5-turbo-1106]: gpt-3.5-turb
Enter your LLM API key: csdn
Enter your LLM Base URL [mostly used for local LLMs, leave blank if not needed - example: http://localhost:5001/v1/]: http://localhost:1337/v1/models/
Enter your LLM Embedding Model
Choices are openai, azureopenai, llama2 or leave blank to default to 'BAAI/bge-small-en-v1.5' via huggingface
>
Enter your workspace directory [default: ./workspace]: /media/skywalk/EXTERNAL_USB/work
make[1]: Leaving directory '/home/skywalk/github/OpenDevin'

That completes the configuration. You can also configure manually. PS: the model name above is misspelled (gpt-3.5-turbo was typed as gpt-3.5-turb), which cost a long debugging session.

Manual configuration:

    If you are feeling particularly adventurous, you can manually update the config.toml file located in the project's root directory. There you will find the llm_api_key and llm_model_name fields, where you can set the LLM of your choice.

The file contents look like this:

cat config.toml
LLM_MODEL="gpt-3.5-turbo"
LLM_API_KEY="csdn"
LLM_BASE_URL="http://localhost:1337/v1/models/"
LLM_EMBEDDING_MODEL=""
WORKSPACE_DIR="/media/skywalk/EXTERNAL_USB/work"
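A quick sanity check on this file can catch a model-name typo before it costs debugging time. On Python 3.11+ tomllib would parse it directly; the tiny parser below is a hypothetical stdlib-only sketch, not part of OpenDevin:

```python
def parse_flat_config(text):
    """Parse flat KEY="value" lines, enough for this config.toml."""
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        config[key.strip()] = value.strip().strip('"')
    return config

config = parse_flat_config('''
LLM_MODEL="gpt-3.5-turbo"
LLM_API_KEY="csdn"
LLM_BASE_URL="http://localhost:1337/v1/models/"
LLM_EMBEDDING_MODEL=""
''')
# A misspelling like "gpt-3.5-turb" would fail this check
assert config["LLM_MODEL"] == "gpt-3.5-turbo"
```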

Running the application

Once setup is complete, starting OpenDevin is as simple as running a single command. It launches both the backend and frontend servers, letting you interact with OpenDevin with ease.

    make run

You can also start the services separately:

    Start the backend server:

If you prefer, you can start the backend server independently, to focus on backend-related tasks or configuration.

    make start-backend

   

    Start the frontend server:

Similarly, you can start the frontend server on its own, to work on frontend-related components or interface enhancements.

    make start-frontend

After starting, it shows:

Starting frontend with npm...

> opendevin-frontend@0.1.0 start

> vite --port 3001

  VITE v5.2.8  ready in 2447 ms

  ➜  Local:   http://localhost:3001/

  ➜  Network: use --host to expose

  ➜  press h + enter to show help

There is a problem at the embedding step here: huggingface.co is hard to reach and needs special handling. See the Debugging section for the fix.

Debugging

make build fails on FreeBSD

Cannot connect to the Docker daemon at unix:///var/run/docker.sock

Docker on FreeBSD is not working yet, so this is on hold for now.

make build fails with "permission denied while trying to connect to the Docker daemon socket"

Pulling Docker image...

Using default tag: latest

permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/images/create?fromImage=ghcr.io%2Fopendevin%2Fsandbox&tag=latest": dial unix /var/run/docker.sock: connect: permission denied

make[1]: *** [Makefile:78: pull-docker-image] Error 1

make: *** [Makefile:24: build] Error 2

Granting permissions on /var/run/docker.sock fixes it (adding your user to the docker group with sudo usermod -aG docker $USER and logging in again is the more permanent fix):

sudo chmod 666 /var/run/docker.sock

Error: Node.js Required version: ^18.17.1

Detect Node.js version...

Current Node.js version is 12.22.9, but corepack is unsupported. Required version: ^18.17.1.

Upgrade with the following commands:

sudo npm install n -g

sudo n 18.17.1

After startup: Error decoding token: Not enough segments

02:35:32 - opendevin:ERROR: auth.py:18 - Error decoding token: Not enough segments

Traceback (most recent call last):

  File "/media/skywalk/EXTERNAL_USB/py311/lib/python3.11/site-packages/jwt/api_jws.py", line 257, in _load

    signing_input, crypto_segment = jwt.rsplit(b".", 1)

    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

ValueError: not enough values to unpack (expected 2, got 1)
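The error comes from PyJWT expecting a token of the form header.payload.signature; splitting off the signature fails when the string contains no dot. A minimal sketch of that failing split (split_jwt is a hypothetical stand-in for PyJWT's internal parsing, not its real API):

```python
def split_jwt(token: bytes):
    """Split a compact JWT into signing input and signature.

    A well-formed JWT has three dot-separated segments
    (header.payload.signature), so rsplit must yield two parts.
    """
    try:
        signing_input, crypto_segment = token.rsplit(b".", 1)
    except ValueError:
        # Mirrors the "Not enough segments" error seen in the log
        raise ValueError("Not enough segments") from None
    return signing_input, crypto_segment

print(split_jwt(b"header.payload.signature"))
```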

Next I plan to look at deeplearning or fastai to figure out how to handle embeddings.

The huggingface embedding model might be worth trying.

Cannot connect to huggingface

  File "/media/skywalk/EXTERNAL_USB/py311/lib/python3.11/site-packages/transformers/utils/hub.py", line 441, in cached_file

    raise EnvironmentError(

ERROR:root:<class 'OSError'>: We couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached files and it looks like BAAI/bge-small-en-v1.5 is not the path to a directory containing a file named config.json.

Reportedly this works:

import os
os.environ['HF_ENDPOINT'] = 'https://hf-mirror.com'

That did not work. I then modified this line in /media/skywalk/EXTERNAL_USB/py311/lib/python3.11/site-packages/transformers/utils/hub.py:

# CLOUDFRONT_DISTRIB_PREFIX = "https://cdn.huggingface.co"

CLOUDFRONT_DISTRIB_PREFIX = "https://hf-mirror.com"

What finally solved it was setting an environment variable:

export HF_ENDPOINT=https://hf-mirror.com
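Setting it inside Python can also work, but only if it happens before any Hugging Face library is imported, because the endpoint is read when the library loads. A sketch, assuming the process has not yet imported transformers or huggingface_hub:

```python
import os

# Must run before transformers / huggingface_hub are imported,
# since they read HF_ENDPOINT at import time.
os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"

# Only now import libraries that download from the Hub, e.g.:
# from transformers import AutoModel
print(os.environ["HF_ENDPOINT"])
```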

After startup: error that the model does not exist

At first I used gpt-3.5-turbo-1106; after that errored, I switched to gpt-3.5-turbo and it still said the model did not exist. It turned out the g4f API endpoint had changed; changing it to http://localhost:1337/v1/models/ fixed it.

There was also the spelling mistake mentioned earlier (gpt-3.5-turbo typed as gpt-3.5-turb), which caused some detours.


