[Beginner Deep-Dive Tutorial 1.6 (Extra)] Step-by-Step Ground Segmentation of Point Cloud Data (Python Code)

1. Introduction:

A recent project (autonomous driving) needed the ground and runway segmented out of point clouds for mapping.

The problem: current deep learning methods run too slowly to meet the real-time requirement.

Traditional methods, on the other hand, have few and scattered tutorials, mostly in C++, which is unfriendly to a model-training alchemist (noob) like me.

So this post records the solution; the final result looks like this:

(figures: final ground segmentation results)

2. Point Cloud Data and Calibration File Handling:

We first inspect the point cloud with PCD Viewer:

(figure: raw point cloud in the viewer)

The point cloud pose looks odd and does not line up with the camera image:

(figure: point cloud projected onto the image, misaligned)

This is because the lidar is mounted at a different position from the RGB camera, so we first need calibration to obtain the relative pose between the two sensors.

Luckily, calibration had been done earlier:

<robot name="vehicle">
  
  <link name="base_link"/>
  <joint name="base_link" type="fixed">
    <parent link="vehicle"/>
    <child link="base_link"/>
    <origin xyz="0.0 0.0 0.0" rpy="0.0 0.0 0.0"/>
  </joint>
  <link name="vehicle"/>
  <joint name="vehicle" type="fixed">
    <parent link="camera_zl"/>
    <child link="vehicle"/>
    <origin xyz="0.26711147670816693 0.752403188145766 -2.2926900679624875" rpy="-1.8178998570948341 -1.524995337811489 -2.9007309812753688"/>
  </joint>

  <link name="camera_bl"/>
  <joint name="camera_bl" type="fixed">
    <parent link="camera_zl"/>
    <child link="camera_bl"/>
    <origin xyz="0.0666022640841149 -0.07191983402396644 -0.8038605249925608" rpy="3.130928216112435 -1.011104185590057 -3.125761138338597"/>
  </joint>
  <link name="camera_br"/>
  <joint name="camera_br" type="fixed">
    <parent link="camera_zl"/>
    <child link="camera_br"/>
    <origin xyz="0.31508974933092165 -0.06844881308276969 -0.7975914385409487" rpy="3.033367386883447 1.136790350587281 3.091039069129502"/>
  </joint>
  <link name="camera_fl"/>
  <joint name="camera_fl" type="fixed">
    <parent link="camera_zl"/>
    <child link="camera_fl"/>
    <origin xyz="-0.07244872559411293 0.002740095394655168 -0.4815423938804117" rpy="0.054676777224247054 -1.0978329300587952 -0.013890888636851888"/>
  </joint>
  <link name="camera_fr"/>
  <joint name="camera_fr" type="fixed">
    <parent link="camera_zl"/>
    <child link="camera_fr"/>
    <origin xyz="0.42413895196331647 -0.0008785938891177506 -0.46606305691530636" rpy="0.01650129586983895 1.0156323899987645 -0.015457555513694873"/>
  </joint>
  <link name="camera_r"/>
  <joint name="camera_r" type="fixed">
    <parent link="camera_zl"/>
    <child link="camera_r"/>
    <origin xyz="0.3079851622899553 0.3031765694838296 -3.2213237367328937" rpy="3.065145406736228 0.030613633447746702 -3.1391999245331332"/>
  </joint>

  <link name="camera_zl"/>

  <link name="camera_zr"/>
  <joint name="camera_zr" type="fixed">
    <parent link="camera_zl"/>
    <child link="camera_zr"/>
    <origin xyz="0.3181843459957608 -0.0022540728801366413 0.016073697009145797" rpy="-0.04337562099208518 -0.10313619847316202 -0.004320097924958878"/>
  </joint>


  <link name="lidar_front"/>
  <joint name="lidar_front" type="fixed">
    <parent link="camera_zl"/>
    <child link="lidar_front"/>
    <origin xyz="0.11156371743249438 -0.015148333508697098 -0.10643051836532358" rpy="0.048645729083959824 0.031505277472651416 -1.5728420649351478"/>
  </joint>
  <link name="lidar_left"/>
  <joint name="lidar_left" type="fixed">
    <parent link="camera_zl"/>
    <child link="lidar_left"/>
    <origin xyz="0.05473934211418374 -0.2372464311776421 -0.6578870124268912" rpy="2.143873331795305 -0.003605912968916103 -1.5671401905346867"/>
  </joint>

  <link name="lidar_right"/>
  <joint name="lidar_right" type="fixed">
    <parent link="camera_zl"/>
    <child link="lidar_right"/>
    <origin xyz="0.05473934211418374 -0.2372464311776421 -0.6578870124268912" rpy="-2.143873331795305 -0.003605912968916103 -1.5671401905346867"/>
  </joint>
  

  <link name="radar_front"/>
  <joint name="radar_front" type="fixed">
    <parent link="camera_zl"/>
    <child link="radar_front"/>
    <origin xyz="0.13748894836135211 -0.08421185553644966 -0.3474210084773697" rpy="1.5572725277007355 -0.048893975755332164 -0.008915615258628964"/>
  </joint>
  <link name="radar_left"/>
  <joint name="radar_left" type="fixed">
    <parent link="camera_zl"/>
    <child link="radar_left"/>
    <origin xyz="0.058702781488714884 0.008377072495749094 -0.37647726942386783" rpy="-1.914662121011925 -1.5282418896055323 -2.800426427824062"/>
  </joint>
  <link name="radar_right"/>
  <joint name="radar_right" type="fixed">
    <parent link="camera_zl"/>
    <child link="radar_right"/>
    <origin xyz="0.30545587935186047 -0.009719793696323697 -0.3876668640506817" rpy="1.7957432523299284 1.5212984965167347 0.21575083611060633"/>
  </joint>
  <link name="radar_back"/>
  <joint name="radar_back" type="fixed">
    <parent link="camera_zl"/>
    <child link="radar_back"/>
    <origin xyz="0.32483836151806866 0.3956804112991841 -3.1568597714353963" rpy="-1.5936611005990984 0.053218225749967596 -3.1403390858259437"/>
  </joint>
</robot>

The poses of the cameras, lidars, and the other sensors are stored in this file: calib.urdf

We can pull the information we need out of this calibration file.

Specifically, we need the parameters of the lidar_front → camera_zl and camera_zl → vehicle transforms from the configuration file.

With these we can first transform the lidar point cloud coordinates into the camera_zl frame, and then into the vehicle frame (centered on the vehicle):


translation_lidar_to_cam = np.array([0.11156371743249438, -0.015148333508697098, -0.10643051836532358])
rotation_lidar_to_cam = np.array([0.048645729083959824, 0.031505277472651416, -1.5728420649351478]) # RPY

translation_cam_to_vehicle = np.array([0.26711147670816693, 0.752403188145766, -2.2926900679624875])
rotation_cam_to_vehicle = np.array([-1.8178998570948341, -1.524995337811489, -2.9007309812753688]) # RPY

Here, translation and rotation are the translation vector and the rotation (as roll-pitch-yaw angles), respectively.

3. Point Cloud Coordinate Transformation:

To bring the lidar point cloud into the vehicle frame, each point goes through two successive coordinate transforms: first from the lidar frame to the camera frame, then from the camera frame to the vehicle frame. The whole process can be expressed with homogeneous transformation matrices.

3.1. A Brief Overview of Coordinate Transforms

In 3D space, a coordinate transform generally consists of a rotation plus a translation and can be represented by a homogeneous transformation matrix: a $4 \times 4$ matrix whose upper-left $3 \times 3$ block is the rotation and whose last column holds the translation in its first three elements. The general form of the transformation matrix $T$ is:

$$T = \begin{bmatrix} R & t \\ 0 & 1 \end{bmatrix}$$

where:

  • $R$ is the $3 \times 3$ rotation matrix.
  • $t$ is the $3 \times 1$ translation vector.

3.2. Building a Homogeneous Transform from a Rotation and a Translation

Given the rotation angles (roll-pitch-yaw, RPY) and a translation vector, the homogeneous transformation matrix is constructed as follows.

3.2.1 Building the Rotation Matrix from RPY Angles

The RPY angles give the rotations about the x (roll), y (pitch), and z (yaw) axes; the corresponding rotation matrices are:

  1. Rotation about the x axis (roll):

$$R_x(\text{roll}) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos(\text{roll}) & -\sin(\text{roll}) \\ 0 & \sin(\text{roll}) & \cos(\text{roll}) \end{bmatrix}$$

  2. Rotation about the y axis (pitch):

$$R_y(\text{pitch}) = \begin{bmatrix} \cos(\text{pitch}) & 0 & \sin(\text{pitch}) \\ 0 & 1 & 0 \\ -\sin(\text{pitch}) & 0 & \cos(\text{pitch}) \end{bmatrix}$$

  3. Rotation about the z axis (yaw):

$$R_z(\text{yaw}) = \begin{bmatrix} \cos(\text{yaw}) & -\sin(\text{yaw}) & 0 \\ \sin(\text{yaw}) & \cos(\text{yaw}) & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

The full rotation matrix $R$ is then obtained by matrix multiplication: $R = R_z \times R_y \times R_x$.
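As a sanity check, the product $R = R_z R_y R_x$ above matches SciPy's extrinsic `'xyz'` Euler convention, which the calibration code later in this post relies on. A minimal sketch using the lidar_front RPY values:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def rpy_to_matrix(roll, pitch, yaw):
    """Build R = Rz @ Ry @ Rx from roll-pitch-yaw angles (radians)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

rpy = [0.048645729083959824, 0.031505277472651416, -1.5728420649351478]
manual = rpy_to_matrix(*rpy)
# SciPy's lowercase 'xyz' means extrinsic rotations about fixed axes,
# applied x then y then z -- the same Rz @ Ry @ Rx product as above
scipy_R = R.from_euler('xyz', rpy).as_matrix()
print(np.allclose(manual, scipy_R))  # True
```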

3.2.2 Building the Homogeneous Transformation Matrix

Given the rotation matrix $R$ and the translation vector $t$, the homogeneous transformation matrix is:

$$T = \begin{bmatrix} R & t \\ 0 & 1 \end{bmatrix}$$

3.3. Building the Two Transformation Matrices

From the calibration values we can construct the following two transformation matrices:

3.3.1 Lidar-to-Camera Transform $T_{\text{lidar\_to\_cam}}$

  1. Build the rotation matrix $R_{\text{lidar\_to\_cam}}$ from the RPY angles $[0.0486, 0.0315, -1.5728]$.
  2. Build the translation vector $t_{\text{lidar\_to\_cam}} = [0.1116, -0.0151, -0.1064]$.
  3. The homogeneous transformation matrix is then:

$$T_{\text{lidar\_to\_cam}} = \begin{bmatrix} R_{\text{lidar\_to\_cam}} & t_{\text{lidar\_to\_cam}} \\ 0 & 1 \end{bmatrix}$$

3.3.2 Camera-to-Vehicle Transform $T_{\text{cam\_to\_vehicle}}$

  1. Build the rotation matrix $R_{\text{cam\_to\_vehicle}}$ from the RPY angles $[-1.8179, -1.5250, -2.9007]$.
  2. Build the translation vector $t_{\text{cam\_to\_vehicle}} = [0.2671, 0.7524, -2.2927]$.
  3. The homogeneous transformation matrix is then:

$$T_{\text{cam\_to\_vehicle}} = \begin{bmatrix} R_{\text{cam\_to\_vehicle}} & t_{\text{cam\_to\_vehicle}} \\ 0 & 1 \end{bmatrix}$$

3.4. Composing the Full Transform

Multiplying the camera-to-vehicle matrix by the lidar-to-camera matrix yields the full lidar-to-vehicle transform:

$$T_{\text{lidar\_to\_vehicle}} = T_{\text{cam\_to\_vehicle}} \times T_{\text{lidar\_to\_cam}}$$
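The multiplication order matters: the transform applied first appears on the right. A minimal sketch (using the rounded calibration values above) verifying that the composed matrix gives the same result as applying the two transforms in sequence:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def make_T(translation, rpy):
    """Homogeneous 4x4 transform from a translation vector and RPY angles."""
    T = np.eye(4)
    T[:3, :3] = R.from_euler('xyz', rpy).as_matrix()
    T[:3, 3] = translation
    return T

T_lidar_to_cam = make_T([0.1116, -0.0151, -0.1064], [0.0486, 0.0315, -1.5728])
T_cam_to_vehicle = make_T([0.2671, 0.7524, -2.2927], [-1.8179, -1.5250, -2.9007])

# Composed transform: lidar-to-cam is applied first, so it sits on the right
T_lidar_to_vehicle = T_cam_to_vehicle @ T_lidar_to_cam

# Applying the two transforms one after another gives the same result
p = np.array([1.0, 2.0, 3.0, 1.0])  # a homogeneous lidar point
step_by_step = T_cam_to_vehicle @ (T_lidar_to_cam @ p)
print(np.allclose(T_lidar_to_vehicle @ p, step_by_step))  # True
```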

3.5. Transforming the Points

Given a point $P_{\text{lidar}}$ in the lidar cloud, written in homogeneous coordinates as $[x, y, z, 1]^T$, the composed matrix maps it into the vehicle frame:

$$P_{\text{vehicle}} = T_{\text{lidar\_to\_vehicle}} \times P_{\text{lidar}}$$

That is the whole process and principle of transforming a lidar point cloud into the vehicle coordinate frame.

4. Height Filtering:

After the transformation, the ground points sit at roughly the same height, so a simple height filter is enough to segment the ground:

import numpy as np
from scipy.spatial.transform import Rotation as R
from pclpy import pcl
import open3d as o3d

def construct_transformation_matrix(translation, rotation):
    """
    Construct a 4x4 transformation matrix from translation and rotation (roll, pitch, yaw).
    """
    # Convert rotation from RPY (Roll, Pitch, Yaw) to a rotation matrix
    rotation_matrix = R.from_euler('xyz', rotation, degrees=False).as_matrix()
    # Construct the transformation matrix
    transformation_matrix = np.eye(4)
    transformation_matrix[:3, :3] = rotation_matrix
    transformation_matrix[:3, 3] = translation
    return transformation_matrix

def apply_transformation(pcd, transformation_matrix):
    """
    Apply a 4x4 transformation matrix to a point cloud.
    """
    # Convert the point cloud to homogeneous coordinates
    points_homogeneous = np.hstack((pcd, np.ones((pcd.shape[0], 1))))
    # Apply the transformation
    points_transformed_homogeneous = points_homogeneous @ transformation_matrix.T
    # Convert back to Cartesian coordinates
    pcd_transformed = points_transformed_homogeneous[:, :3]
    return pcd_transformed

# Parameters from the configuration file for lidar_front to camera_zl and camera_zl to vehicle
translation_lidar_to_cam = np.array([0.11156371743249438, -0.015148333508697098, -0.10643051836532358])
rotation_lidar_to_cam = np.array([0.048645729083959824, 0.031505277472651416, -1.5728420649351478]) # RPY

translation_cam_to_vehicle = np.array([0.26711147670816693, 0.752403188145766, -2.2926900679624875])
rotation_cam_to_vehicle = np.array([-1.8178998570948341, -1.524995337811489, -2.9007309812753688]) # RPY

# Construct the transformation matrices
transformation_matrix_lidar_to_cam = construct_transformation_matrix(translation_lidar_to_cam, rotation_lidar_to_cam)
transformation_matrix_cam_to_vehicle = construct_transformation_matrix(translation_cam_to_vehicle, rotation_cam_to_vehicle)

# Combine transformations to get a single transformation from lidar_front to vehicle
transformation_matrix_lidar_to_vehicle = transformation_matrix_cam_to_vehicle @ transformation_matrix_lidar_to_cam

# This matrix can be used to transform point cloud data from lidar_front to vehicle coordinates.
print(transformation_matrix_lidar_to_vehicle)

# Path to your PCD file
pcd_path = '1702985831-449795584.pcd'
# Read the PCD file
pcd = o3d.io.read_point_cloud(pcd_path)

# Convert the point cloud to a NumPy array
pcd_np = np.asarray(pcd.points)

# Apply the lidar_front -> vehicle transformation
pcd_transformed_np = apply_transformation(pcd_np, transformation_matrix_lidar_to_vehicle)

# Create a new point cloud object for the transformed data
pcd_transformed = o3d.geometry.PointCloud()
pcd_transformed.points = o3d.utility.Vector3dVector(pcd_transformed_np)

# Save the transformed point cloud to a new PCD file
o3d.io.write_point_cloud('transformed_pcd.pcd', pcd_transformed)

print("Point cloud transformation complete and saved to 'transformed_pcd.pcd'")

# Remove statistical outliers (80 neighbors, 2.0 std-dev ratio)
cl, ind = pcd_transformed.remove_statistical_outlier(nb_neighbors=80, std_ratio=2.0)

# Keep only the inlier points using the returned indices
cleaned_pcd = pcd_transformed.select_by_index(ind)

# Save the cleaned point cloud to a new PCD file
o3d.io.write_point_cloud('cleaned_transformed_pcd.pcd', cleaned_pcd)

print("Outliers removed and the cleaned point cloud is saved to 'cleaned_transformed_pcd.pcd'")


# Load the cleaned PCD file with pclpy
cloud = pcl.PointCloud.PointXYZ()
pcl.io.loadPCDFile('cleaned_transformed_pcd.pcd', cloud)

# Convert the point cloud to a NumPy array (pclpy exposes an .xyz view)
points = np.asarray(cloud.xyz)
print("points:", points.shape)

# Crop points whose second coordinate is below -30 m
valid = points[:, 1] > -30

# Remap the axes for visualization (height ends up on the z axis)
z = points[:, 0][valid]
x = points[:, 1][valid]
y = points[:, 2][valid] * -1
colors = z - np.min(z)  # color by height (only used for visualization)

pcd = o3d.geometry.PointCloud()
points_np = np.stack([x, y, z], -1)

# Assign the NumPy points to the point cloud object
pcd.points = o3d.utility.Vector3dVector(points_np)
# Save the final point cloud to a new PCD file
o3d.io.write_point_cloud('final.pcd', pcd)
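The height threshold itself is not spelled out in the listing above, so here is a minimal numpy sketch of the filtering idea. The 0.2 m tolerance and the synthetic points are illustrative assumptions, not values from the original data:

```python
import numpy as np

def filter_ground_by_height(points, ground_z=0.0, tolerance=0.2):
    """Split an (n, 3) array into ground / non-ground by distance of z from the ground height."""
    mask = np.abs(points[:, 2] - ground_z) < tolerance
    return points[mask], points[~mask]

# Synthetic example: three near-ground points and one elevated point
pts = np.array([[0.0, 0.0, 0.05],
                [1.0, 2.0, -0.10],
                [3.0, 1.0, 0.15],
                [2.0, 2.0, 1.50]])
ground, non_ground = filter_ground_by_height(pts)
print(len(ground), len(non_ground))  # 3 1
```

In practice the ground height and tolerance would be read off the transformed cloud (e.g. from a histogram of z values) rather than hard-coded.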

(figure: point cloud after filtering, colored by height)

5. Normal Clustering:

Normal clustering is another common ground segmentation technique in point cloud processing. It separates ground and non-ground points by analyzing the normal direction of each point.

The principle and formulas behind normal-based ground segmentation are as follows:

5.1. Basic Idea

The core idea is to use each point's normal direction to identify the ground. Ground normals are usually consistent and nearly opposite to gravity (typically pointing along the Z axis), while the normals of non-ground points deviate far more. Clustering the normal directions and picking out the cluster aligned with the ground direction therefore separates the ground from everything else.

5.2. Normal Estimation

Given a point $p_i$ in the cloud, its normal can be estimated from a set of neighboring points. Let $\mathcal{N}(p_i)$ be the neighborhood of $p_i$; the steps are:

  1. Neighborhood search: obtain the neighborhood $\mathcal{N}(p_i)$ via k-nearest neighbors (k-NN) or a radius search.

  2. Covariance matrix: build the covariance matrix $C$ of the neighborhood:

$$C = \frac{1}{|\mathcal{N}(p_i)|} \sum_{p_j \in \mathcal{N}(p_i)} (p_j - \bar{p})(p_j - \bar{p})^T$$

    where $\bar{p}$ is the neighborhood centroid:

$$\bar{p} = \frac{1}{|\mathcal{N}(p_i)|} \sum_{p_j \in \mathcal{N}(p_i)} p_j$$

  3. Normal extraction: eigendecompose $C$; the eigenvector belonging to the smallest eigenvalue is the normal direction $\mathbf{n}_i$ of the point.
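A minimal numpy sketch of this PCA-style normal estimation for a single neighborhood (the synthetic planar points are an illustrative assumption):

```python
import numpy as np

def estimate_normal(neighbors):
    """Estimate a surface normal from a (k, 3) array of neighboring points
    via eigendecomposition of the neighborhood covariance matrix."""
    centroid = neighbors.mean(axis=0)
    centered = neighbors - centroid
    C = centered.T @ centered / len(neighbors)
    eigvals, eigvecs = np.linalg.eigh(C)  # eigh returns eigenvalues in ascending order
    normal = eigvecs[:, 0]                # eigenvector of the smallest eigenvalue
    # Orient the normal upward for consistency
    return normal if normal[2] >= 0 else -normal

# Noisy samples from the plane z = 0: the estimated normal should be close to [0, 0, 1]
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-1, 1, 50),
                       rng.uniform(-1, 1, 50),
                       rng.normal(0, 1e-3, 50)])
print(estimate_normal(pts))  # approximately [0, 0, 1]
```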

5.3. Clustering the Normal Directions

To separate ground from non-ground points, the normal directions of all points in the cloud are clustered, typically as follows:

  1. Normal encoding: encode each normal $\mathbf{n}_i = (n_x, n_y, n_z)$, commonly by mapping it onto the sphere and representing it with two angles (azimuth and elevation).

  2. Clustering: cluster the encoded normal directions with an algorithm such as K-means or DBSCAN. The cluster center for ground points usually lies close to the direction opposite gravity (e.g. $\mathbf{n}_{\text{ground}} \approx [0, 0, 1]$).
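A small numpy sketch of the spherical-coordinate encoding in step 1 (the sample normals are illustrative); the resulting 2-D angle pairs can then be fed to any clustering algorithm:

```python
import numpy as np

def encode_normals(normals):
    """Encode unit normals (n, 3) as (azimuth, elevation) angle pairs in radians."""
    azimuth = np.arctan2(normals[:, 1], normals[:, 0])
    elevation = np.arcsin(np.clip(normals[:, 2], -1.0, 1.0))
    return np.column_stack([azimuth, elevation])

# Two near-vertical normals (ground-like) and one horizontal normal (e.g. a wall)
normals = np.array([[0.00, 0.0, 1.0000],
                    [0.05, 0.0, 0.9987],
                    [1.00, 0.0, 0.0000]])
angles = encode_normals(normals)
# Ground-like normals map to elevation near 90 degrees; the wall normal maps near 0
print(np.degrees(angles[:, 1]))  # approximately [90, 87, 0]
```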

5.4. Identifying Ground Points

Once the cluster aligned with the ground direction has been found, the ground points can be filtered further:

  1. Normal-angle filtering: points whose normals are close to the ground normal are treated as ground. A common criterion is the angle between a point's normal and the gravity direction:

$$\theta = \arccos(\mathbf{n}_i \cdot \mathbf{n}_{\text{ground}})$$

    If $\theta$ is below a chosen threshold (e.g. $10^\circ$), the point is labeled as ground.

  2. Height filtering: the points' height is used to reject misclassified points. If the range of ground heights is known, an extra height constraint can be applied.
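The two criteria above combine naturally into a single boolean mask. A minimal sketch (the 10° angle and 0.3 m height thresholds, and the sample points, are illustrative assumptions):

```python
import numpy as np

def ground_mask(points, normals, max_angle_deg=10.0, max_height=0.3):
    """Label points as ground when their normal is within max_angle_deg of
    straight up AND their z coordinate is below max_height."""
    up = np.array([0.0, 0.0, 1.0])
    cos_angle = np.abs(normals @ up)  # |cos| also handles flipped normals
    angle_ok = cos_angle > np.cos(np.radians(max_angle_deg))
    height_ok = points[:, 2] < max_height
    return angle_ok & height_ok

# Illustrative data: a flat ground point, a wall point, and a high flat roof point
points = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.1], [2.0, 0.0, 3.0]])
normals = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
print(ground_mask(points, normals))  # [ True False False]
```

Only the first point passes both tests: the wall fails the angle check, and the roof fails the height check.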

5.5. Final Ground Segmentation

After the steps above, points whose normal direction is close to the ground direction and whose height is within range are labeled as ground; everything else is non-ground. The segmentation can be refined by iteratively tuning the normal-angle and height thresholds.

5.6. Python Code

import numpy as np
from pclpy import pcl
from mayavi import mlab

def visualize_point_clouds(original_cloud, ground_cloud):
    # Extract the x, y, z coordinates of the original cloud
    original_points = np.asarray(original_cloud.xyz)
    # Extract the x, y, z coordinates of the ground cloud
    ground_points = np.asarray(ground_cloud.xyz)

    # Create a new scene
    fig = mlab.figure(bgcolor=(0, 0, 0), size=(800, 600))

    # Draw the original point cloud in white
    mlab.points3d(original_points[:, 0], original_points[:, 1], original_points[:, 2], color=(1, 1, 1), mode="point", figure=fig)

    # Draw the ground point cloud in green
    mlab.points3d(ground_points[:, 0], ground_points[:, 1], ground_points[:, 2], color=(0, 1, 0), mode="point", figure=fig)

    mlab.show()

def load_point_cloud_from_bin(file_path):
    """Load point cloud data from a .bin file."""
    # Each point is assumed to be four float32 values (x, y, z, reflectance)
    points = np.fromfile(file_path, dtype=np.float32).reshape(-1, 4)
    # Build a PCL point cloud directly from the xyz columns
    point_cloud = pcl.PointCloud.PointXYZ.from_array(np.ascontiguousarray(points[:, :3]))
    return point_cloud

# The rest of the pipeline follows the earlier example
def estimate_normals(point_cloud, radius=0.03):
    """Estimate per-point normals for the cloud."""
    ne = pcl.features.NormalEstimation.PointXYZ_Normal()
    tree = pcl.search.KdTree.PointXYZ()
    normals = pcl.PointCloud.Normal()

    ne.setInputCloud(point_cloud)
    ne.setSearchMethod(tree)
    ne.setRadiusSearch(radius)  # radius of the sphere used to estimate each normal
    ne.compute(normals)
    return normals

def segment_ground(point_cloud, normals, distance_threshold=0.02):
    """Segment the ground plane using the normal information."""
    seg = pcl.segmentation.SACSegmentationFromNormals.PointXYZ_Normal()
    seg.setModelType(0)   # 0 corresponds to pcl::SACMODEL_PLANE
    seg.setMethodType(0)  # 0 corresponds to pcl::SAC_RANSAC
    seg.setDistanceThreshold(distance_threshold)
    seg.setInputCloud(point_cloud)
    seg.setInputNormals(normals)

    inliers = pcl.PointIndices()
    coefficients = pcl.ModelCoefficients()
    seg.segment(inliers, coefficients)

    # Extract the ground points
    extract = pcl.filters.ExtractIndices.PointXYZ()
    extract.setInputCloud(point_cloud)
    extract.setIndices(inliers)
    extract.setNegative(False)  # False keeps only the plane inliers (the ground)
    ground_cloud = pcl.PointCloud.PointXYZ()
    extract.filter(ground_cloud)
    return ground_cloud

# Example usage
file_path = "xxx.bin"  # replace with the path to your point cloud file
point_cloud = load_point_cloud_from_bin(file_path)

if point_cloud.size() > 0:
    normals = estimate_normals(point_cloud)
    ground_cloud = segment_ground(point_cloud, normals)
    # ground_cloud can be processed further or saved to a file
    # pcl.io.savePLYFile("ground.ply", ground_cloud)

    # Visualize the original cloud and the segmented ground with the helper above
    visualize_point_clouds(point_cloud, ground_cloud)