AI Project 20: DeepSORT Multi-Object Tracking Based on YOLOv8 Instance Segmentation

This is an original article; if you repost it, please credit the source.

As mentioned in earlier posts, there are many approaches to object tracking; DeepSORT is among the most widely used.

This post documents a visual tracking algorithm that combines YOLOv8 instance segmentation with DeepSORT. By pairing YOLOv8's detection and segmentation with DeepSORT's appearance-feature tracking, the algorithm maintains accurate and stable tracking of targets in complex environments. In computer vision, this kind of tracking is widely applied in security surveillance, autonomous driving, and similar fields.
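At a high level, each frame flows through detection/segmentation, tracking, and drawing. Below is a minimal sketch of that loop; detect_and_segment, tracker, and draw_tracks are hypothetical stand-ins for the real YOLOv8 predictor and DeepSort calls in the predict.py listing later in this post.

# Minimal per-frame sketch of the segment-then-track loop (hypothetical names)
for frame in video_frames:
    boxes, masks, classes, confs = detect_and_segment(frame)  # YOLOv8-seg inference
    tracks = tracker.update(boxes, confs, classes, frame)     # DeepSORT assigns persistent IDs
    draw_tracks(frame, masks, tracks)                         # overlay masks, IDs, and trails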

Source code: GitHub - MuhammadMoinFaisal/YOLOv8_Segmentation_DeepSORT_Object_Tracking: YOLOv8 Segmentation with DeepSORT Object Tracking (ID + Trails)

Thanks to Muhammad Moin.

I. Environment Setup

Anaconda3 is used here; install it yourself if needed (the earlier articles cover the setup).

1. Create a virtual environment

conda create -n YOLOv8-Seg-Deepsort python=3.8

2. Activate it

conda activate YOLOv8-Seg-Deepsort
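A quick sanity check that the environment is active (the shell prompt should show the env name):

python --version

This should print Python 3.8.x.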

II. Download the Code

You can use the upstream source code, or use mine: I merged YOLOv8_Segmentation_DeepSORT_Object_Tracking and YOLOv8-DeepSORT-Object-Tracking into a single repo.

Download:

Yinyifeng18/YOLOv8_Segmentation_DeepSORT_Object_Tracking (github.com)

git clone https://github.com/Yinyifeng18/YOLOv8_Segmentation_DeepSORT_Object_Tracking.git

III. Install Dependencies

pip install -e ".[dev]"

If you use the upstream source as-is, you will hit the following error:

AttributeError: module 'numpy' has no attribute 'float'
 
Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.

The cause is that the code depends on an older NumPy: np.float was deprecated in NumPy 1.20 and removed in 1.24. Downgrade NumPy to 1.23.5:

pip install numpy==1.23.5
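If you want to confirm the version before and after, this short check (illustrative, not part of the repo) reproduces the failure mode:

# np.float was deprecated in NumPy 1.20 and removed in 1.24;
# on NumPy >= 1.24 the last line raises the AttributeError above.
import numpy as np
print(np.__version__)   # should print 1.23.5 after the downgrade
x = np.float(1.0)       # works (with a DeprecationWarning) on <= 1.23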

IV. Testing

1. Change into the detection or segmentation directory

cd YOLOv8_Segmentation_DeepSORT_Object_Tracking\ultralytics\yolo\v8\detect

cd YOLOv8_Segmentation_DeepSORT_Object_Tracking\ultralytics\yolo\v8\segment

2. Run a test

python predict.py model=yolov8l.pt source="test3.mp4" show=True

python predict.py model=yolov8x-seg.pt source="test3.mp4" show=True

The instance-segmentation command is the one used for this test.

To save the output video, just add the parameter save=True.
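For example:

python predict.py model=yolov8x-seg.pt source="test3.mp4" show=True save=True

The annotated video is then written under the runs/ output directory.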

V. Code Walkthrough

DeepSORT needs its model files (the DeepSORT zip), available here:


https://drive.google.com/drive/folders/1kna8eWGrSfzaR6DtNJ8_GchGgPMv3VC8?usp=sharing
  • After downloading the DeepSORT zip file, extract it, then place the deep_sort_pytorch folder inside the ultralytics/yolo/v8/segment folder.

  • The resulting directory layout is sketched below.
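A sketch of the expected layout (exact contents of deep_sort_pytorch may vary slightly by version):

ultralytics/yolo/v8/segment/
├── deep_sort_pytorch/
│   ├── configs/
│   │   └── deep_sort.yaml
│   ├── deep_sort/
│   └── utils/
└── predict.py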

The full predict.py is attached here:

# Ultralytics YOLO 🚀, GPL-3.0 license

import hydra
import torch

from ultralytics.yolo.utils import DEFAULT_CONFIG, ROOT, ops
from ultralytics.yolo.utils.checks import check_imgsz
from ultralytics.yolo.utils.plotting import colors, save_one_box

from ultralytics.yolo.v8.detect.predict import DetectionPredictor
from numpy import random


import cv2
from deep_sort_pytorch.utils.parser import get_config
from deep_sort_pytorch.deep_sort import DeepSort
# A deque is a double-ended queue; we prefer it over a list when we need fast
# appends and pops at both ends (here: fixed-length trails of track centers)
from collections import deque
import numpy as np
palette = (2 ** 11 - 1, 2 ** 15 - 1, 2 ** 20 - 1)  # constants hashed with the class id to derive per-class colors
data_deque = {}  # track id -> deque of recent center points (used to draw trails)

deepsort = None  # global tracker instance, created by init_tracker()

object_counter = {}   # per-class counts of objects crossing the line heading "South"

object_counter1 = {}  # per-class counts of objects crossing the line heading "North"

# counting line: two (x, y) endpoints in pixel coordinates
line = [(100, 500), (1050, 500)]
def init_tracker():
    global deepsort
    cfg_deep = get_config()
    cfg_deep.merge_from_file("deep_sort_pytorch/configs/deep_sort.yaml")

    # Key DeepSORT parameters (values come from deep_sort.yaml):
    #   MAX_DIST: appearance-feature distance threshold for matching
    #   MAX_IOU_DISTANCE: IoU gate for the fallback IoU matching
    #   MAX_AGE: frames a lost track is kept before deletion
    #   N_INIT: consecutive detections needed to confirm a new track
    #   NN_BUDGET: maximum appearance features stored per track
    deepsort = DeepSort(cfg_deep.DEEPSORT.REID_CKPT,
                        max_dist=cfg_deep.DEEPSORT.MAX_DIST, min_confidence=cfg_deep.DEEPSORT.MIN_CONFIDENCE,
                        nms_max_overlap=cfg_deep.DEEPSORT.NMS_MAX_OVERLAP, max_iou_distance=cfg_deep.DEEPSORT.MAX_IOU_DISTANCE,
                        max_age=cfg_deep.DEEPSORT.MAX_AGE, n_init=cfg_deep.DEEPSORT.N_INIT, nn_budget=cfg_deep.DEEPSORT.NN_BUDGET,
                        use_cuda=True)
##########################################################################################
def xyxy_to_xywh(*xyxy):
    """Convert an (x1, y1, x2, y2) box to center-based (x_c, y_c, w, h)."""
    bbox_left = min([xyxy[0].item(), xyxy[2].item()])
    bbox_top = min([xyxy[1].item(), xyxy[3].item()])
    bbox_w = abs(xyxy[0].item() - xyxy[2].item())
    bbox_h = abs(xyxy[1].item() - xyxy[3].item())
    x_c = (bbox_left + bbox_w / 2)
    y_c = (bbox_top + bbox_h / 2)
    w = bbox_w
    h = bbox_h
    return x_c, y_c, w, h

def xyxy_to_tlwh(bbox_xyxy):
    """Convert (x1, y1, x2, y2) boxes to top-left (x, y, w, h) format."""
    tlwh_bboxs = []
    for box in bbox_xyxy:
        x1, y1, x2, y2 = [int(v) for v in box]
        tlwh_bboxs.append([x1, y1, int(x2 - x1), int(y2 - y1)])
    return tlwh_bboxs

def compute_color_for_labels(label):
    """
    Simple function that adds fixed color depending on the class
    """
    if label == 0: #person
        color = (85,45,255)
    elif label == 2: # Car
        color = (222,82,175)
    elif label == 3:  # Motorbike
        color = (0, 204, 255)
    elif label == 5:  # Bus
        color = (0, 149, 255)
    else:
        color = [int((p * (label ** 2 - label + 1)) % 255) for p in palette]
    return tuple(color)

def draw_border(img, pt1, pt2, color, thickness, r, d):
    """Draw a filled label background with rounded corners (radius r, corner length d)."""
    x1,y1 = pt1
    x2,y2 = pt2
    # Top left
    cv2.line(img, (x1 + r, y1), (x1 + r + d, y1), color, thickness)
    cv2.line(img, (x1, y1 + r), (x1, y1 + r + d), color, thickness)
    cv2.ellipse(img, (x1 + r, y1 + r), (r, r), 180, 0, 90, color, thickness)
    # Top right
    cv2.line(img, (x2 - r, y1), (x2 - r - d, y1), color, thickness)
    cv2.line(img, (x2, y1 + r), (x2, y1 + r + d), color, thickness)
    cv2.ellipse(img, (x2 - r, y1 + r), (r, r), 270, 0, 90, color, thickness)
    # Bottom left
    cv2.line(img, (x1 + r, y2), (x1 + r + d, y2), color, thickness)
    cv2.line(img, (x1, y2 - r), (x1, y2 - r - d), color, thickness)
    cv2.ellipse(img, (x1 + r, y2 - r), (r, r), 90, 0, 90, color, thickness)
    # Bottom right
    cv2.line(img, (x2 - r, y2), (x2 - r - d, y2), color, thickness)
    cv2.line(img, (x2, y2 - r), (x2, y2 - r - d), color, thickness)
    cv2.ellipse(img, (x2 - r, y2 - r), (r, r), 0, 0, 90, color, thickness)

    cv2.rectangle(img, (x1 + r, y1), (x2 - r, y2), color, -1, cv2.LINE_AA)
    cv2.rectangle(img, (x1, y1 + r), (x2, y2 - r - d), color, -1, cv2.LINE_AA)
    
    cv2.circle(img, (x1 +r, y1+r), 2, color, 12)
    cv2.circle(img, (x2 -r, y1+r), 2, color, 12)
    cv2.circle(img, (x1 +r, y2-r), 2, color, 12)
    cv2.circle(img, (x2 -r, y2-r), 2, color, 12)
    
    return img

def UI_box(x, img, color=None, label=None, line_thickness=None):
    # Plots one bounding box on image img
    tl = line_thickness or round(0.002 * (img.shape[0] + img.shape[1]) / 2) + 1  # line/font thickness
    color = color or [random.randint(0, 255) for _ in range(3)]
    c1, c2 = (int(x[0]), int(x[1])), (int(x[2]), int(x[3]))
    cv2.rectangle(img, c1, c2, color, thickness=tl, lineType=cv2.LINE_AA)
    if label:
        tf = max(tl - 1, 1)  # font thickness
        t_size = cv2.getTextSize(label, 0, fontScale=tl / 3, thickness=tf)[0]

        img = draw_border(img, (c1[0], c1[1] - t_size[1] -3), (c1[0] + t_size[0], c1[1]+3), color, 1, 8, 2)

        cv2.putText(img, label, (c1[0], c1[1] - 2), 0, tl / 3, [225, 255, 255], thickness=tf, lineType=cv2.LINE_AA)


def intersect(A, B, C, D):
    """Return True if line segments AB and CD intersect."""
    return ccw(A,C,D) != ccw(B,C,D) and ccw(A,B,C) != ccw(A,B,D)

def ccw(A, B, C):
    """Return True if points A, B, C are in counter-clockwise order."""
    return (C[1]-A[1]) * (B[0]-A[0]) > (B[1]-A[1]) * (C[0]-A[0])


def get_direction(point1, point2):
    direction_str = ""

    # calculate y axis direction
    if point1[1] > point2[1]:
        direction_str += "South"
    elif point1[1] < point2[1]:
        direction_str += "North"
    else:
        direction_str += ""

    # calculate x axis direction
    if point1[0] > point2[0]:
        direction_str += "East"
    elif point1[0] < point2[0]:
        direction_str += "West"
    else:
        direction_str += ""

    return direction_str
def draw_boxes(img, bbox, names,object_id, identities=None, offset=(0, 0)):
    cv2.line(img, line[0], line[1], (46,162,112), 3)

    height, width, _ = img.shape
    # remove tracked point from buffer if object is lost
    for key in list(data_deque):
      if key not in identities:
        data_deque.pop(key)

    for i, box in enumerate(bbox):
        x1, y1, x2, y2 = [int(i) for i in box]
        x1 += offset[0]
        x2 += offset[0]
        y1 += offset[1]
        y2 += offset[1]

        # center of the box's bottom edge ((y2 + y2) / 2 is simply y2)
        center = (int((x2+x1)/ 2), int((y2+y2)/2))

        # get ID of object
        id = int(identities[i]) if identities is not None else 0

        # create new buffer for new object
        if id not in data_deque:  
          data_deque[id] = deque(maxlen= 64)
        color = compute_color_for_labels(object_id[i])
        obj_name = names[object_id[i]]
        label = '{}{:d}'.format("", id) + ":"+ '%s' % (obj_name)

        # add center to buffer
        data_deque[id].appendleft(center)
        if len(data_deque[id]) >= 2:
          direction = get_direction(data_deque[id][0], data_deque[id][1])
          if intersect(data_deque[id][0], data_deque[id][1], line[0], line[1]):
              cv2.line(img, line[0], line[1], (255, 255, 255), 3)
              if "South" in direction:
                if obj_name not in object_counter:
                    object_counter[obj_name] = 1
                else:
                    object_counter[obj_name] += 1
              if "North" in direction:
                if obj_name not in object_counter1:
                    object_counter1[obj_name] = 1
                else:
                    object_counter1[obj_name] += 1
        UI_box(box, img, label=label, color=color, line_thickness=2)
        # draw trail
        for i in range(1, len(data_deque[id])):
            # check if on buffer value is none
            if data_deque[id][i - 1] is None or data_deque[id][i] is None:
                continue
            # generate dynamic thickness of trails
            thickness = int(np.sqrt(64 / float(i + i)) * 1.5)
            # draw trails
            cv2.line(img, data_deque[id][i - 1], data_deque[id][i], color, thickness)
    
    #4. Display Count in top right corner
        for idx, (key, value) in enumerate(object_counter1.items()):
            cnt_str = str(key) + ":" +str(value)
            cv2.line(img, (width - 500,25), (width,25), [85,45,255], 40)
            cv2.putText(img, f'Number of Vehicles Entering', (width - 500, 35), 0, 1, [225, 255, 255], thickness=2, lineType=cv2.LINE_AA)
            cv2.line(img, (width - 150, 65 + (idx*40)), (width, 65 + (idx*40)), [85, 45, 255], 30)
            cv2.putText(img, cnt_str, (width - 150, 75 + (idx*40)), 0, 1, [255, 255, 255], thickness = 2, lineType = cv2.LINE_AA)

        for idx, (key, value) in enumerate(object_counter.items()):
            cnt_str1 = str(key) + ":" +str(value)
            cv2.line(img, (20,25), (500,25), [85,45,255], 40)
            cv2.putText(img, f'Number of Vehicles Leaving', (11, 35), 0, 1, [225, 255, 255], thickness=2, lineType=cv2.LINE_AA)
            cv2.line(img, (20,65+ (idx*40)), (127,65+ (idx*40)), [85,45,255], 30)
            cv2.putText(img, cnt_str1, (11, 75+ (idx*40)), 0, 1, [225, 255, 255], thickness=2, lineType=cv2.LINE_AA)
    
    
    
    return img


class SegmentationPredictor(DetectionPredictor):

    def postprocess(self, preds, img, orig_img):
        masks = []
        # TODO: filter by classes
        p = ops.non_max_suppression(preds[0],
                                    self.args.conf,
                                    self.args.iou,
                                    agnostic=self.args.agnostic_nms,
                                    max_det=self.args.max_det,
                                    nm=32)
        proto = preds[1][-1]
        for i, pred in enumerate(p):
            shape = orig_img[i].shape if self.webcam else orig_img.shape
            if not len(pred):
                continue
            if self.args.retina_masks:
                pred[:, :4] = ops.scale_boxes(img.shape[2:], pred[:, :4], shape).round()
                masks.append(ops.process_mask_native(proto[i], pred[:, 6:], pred[:, :4], shape[:2]))  # HWC
            else:
                masks.append(ops.process_mask(proto[i], pred[:, 6:], pred[:, :4], img.shape[2:], upsample=True))  # HWC
                pred[:, :4] = ops.scale_boxes(img.shape[2:], pred[:, :4], shape).round()

        return (p, masks)

    def write_results(self, idx, preds, batch):
        p, im, im0 = batch
        log_string = ""
        if len(im.shape) == 3:
            im = im[None]  # expand for batch dim
        self.seen += 1
        if self.webcam:  # batch_size >= 1
            log_string += f'{idx}: '
            frame = self.dataset.count
        else:
            frame = getattr(self.dataset, 'frame', 0)

        self.data_path = p
        self.txt_path = str(self.save_dir / 'labels' / p.stem) + ('' if self.dataset.mode == 'image' else f'_{frame}')
        log_string += '%gx%g ' % im.shape[2:]  # print string
        self.annotator = self.get_annotator(im0)

        preds, masks = preds
        det = preds[idx]
        if len(det) == 0:
            return log_string
        # Segments
        mask = masks[idx]
        if self.args.save_txt:
            segments = [
                ops.scale_segments(im0.shape if self.args.retina_masks else im.shape[2:], x, im0.shape, normalize=True)
                for x in reversed(ops.masks2segments(mask))]

        # Print results
        for c in det[:, 5].unique():
            n = (det[:, 5] == c).sum()  # detections per class
            log_string += f"{n} {self.model.names[int(c)]}{'s' * (n > 1)}, "  # add to string

        # Mask plotting
        self.annotator.masks(
            mask,
            colors=[colors(x, True) for x in det[:, 5]],
            im_gpu=torch.as_tensor(im0, dtype=torch.float16).to(self.device).permute(2, 0, 1).flip(0).contiguous() /
            255 if self.args.retina_masks else im[idx])

        det = reversed(det[:, :6])
        self.all_outputs.append([det, mask])
        xywh_bboxs = []
        confs = []
        oids = []
        outputs = []
        # Write results
        for j, (*xyxy, conf, cls) in enumerate(reversed(det[:, :6])):
            x_c, y_c, bbox_w, bbox_h = xyxy_to_xywh(*xyxy)
            xywh_obj = [x_c, y_c, bbox_w, bbox_h]
            xywh_bboxs.append(xywh_obj)
            confs.append([conf.item()])
            oids.append(int(cls))
        xywhs = torch.Tensor(xywh_bboxs)
        confss = torch.Tensor(confs)

        # hand detections (center-xywh, confidences, class ids) to DeepSORT;
        # each output row is [x1, y1, x2, y2, track_id, class_id]
        outputs = deepsort.update(xywhs, confss, oids, im0)
        if len(outputs) > 0:
            bbox_xyxy = outputs[:, :4]
            identities = outputs[:, -2]
            object_id = outputs[:, -1]
            
            draw_boxes(im0, bbox_xyxy, self.model.names, object_id,identities)
        return log_string


@hydra.main(version_base=None, config_path=str(DEFAULT_CONFIG.parent), config_name=DEFAULT_CONFIG.name)
def predict(cfg):
    init_tracker()
    cfg.model = cfg.model or "yolov8n-seg.pt"
    cfg.imgsz = check_imgsz(cfg.imgsz, min_dim=2)  # check image size
    cfg.source = cfg.source if cfg.source is not None else ROOT / "assets"

    predictor = SegmentationPredictor(cfg)
    predictor()


if __name__ == "__main__":
    predict()

What is shown here is object segmentation plus DeepSORT tracking (ID + trails) plus vehicle counting.
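The vehicle counting rests on the ccw/intersect helpers in the listing: a track is counted when the segment between its last two center points crosses the counting line. A tiny standalone check of that geometry (illustrative only):

# Standalone check of the segment-intersection predicate used for counting
def ccw(A, B, C):
    return (C[1]-A[1]) * (B[0]-A[0]) > (B[1]-A[1]) * (C[0]-A[0])

def intersect(A, B, C, D):
    return ccw(A,C,D) != ccw(B,C,D) and ccw(A,B,C) != ccw(A,B,D)

line = [(100, 500), (1050, 500)]                 # same counting line as predict.py
print(intersect((400, 520), (400, 480), *line))  # True: track crossed the line
print(intersect((400, 520), (400, 510), *line))  # False: still on the same side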

The detection-only version (no segmentation) lives in the detect directory; test it yourself.

Test results: [result screenshots from the original post omitted]

If anything here infringes your rights, or you need the complete code, please contact the blogger promptly.
