@咸菜 Note that the image requires a separate boot partition.
Posts by weijiz
-
Xiaoqiang Developer Edition System Image Download
This tutorial is outdated; please see this article instead.
When using an image, be careful to choose the one that matches your device.
The image contains the jade version of ROS along with various Xiaoqiang-specific configuration and optimizations. If the system gets corrupted, you can try restoring it with this image.
System image
Generated on 2017-03-03
How to use the image
The following shows how to use the image, taking installation in a virtual machine as an example.
If you install the Xiaoqiang system image in a virtual machine, disable the boot-time startup service afterwards to avoid conflicts with the real Xiaoqiang:
sudo service startup stop
rosrun robot_upstart uninstall startup
Download the image
Download the Xiaoqiang image from the link above. After the download completes, don't forget to verify the MD5 checksum.
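As a sketch of that check (the filename and checksum below are placeholders; use the real image name and the checksum published on the download page):

```shell
# Verify a download against its published MD5 checksum.
# "image.img" and "expected" are stand-ins: here we create a tiny demo file
# whose MD5 is known; for the real image, paste the checksum from the site.
printf 'hello' > image.img
expected="5d41402abc4b2a76b9719d911017c592"   # published checksum (placeholder)
actual=$(md5sum image.img | awk '{print $1}')
if [ "$actual" = "$expected" ]; then
    echo "MD5 OK"
else
    echo "MD5 mismatch -- re-download the image"
fi
```

If the hashes differ, the download is corrupt and flashing it will produce a broken system, so redo the download before proceeding.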
Create a virtual machine
Make the selections shown in the figures, in order.
Add the iso file to the virtual machine
Start the installation
Boot the virtual machine
Use settings similar to the figure above. Note: to make sure the software runs correctly, choose the settings that match where the system is being installed.
When installing on the Xiaoqiang host: the username must be xiaoqiang and the computer name must be xiaoqiang-desktop.
When installing on your own computer: the username must be xiaoqiang and the computer name must be bluewhale-client.
The next step is partitioning. Make sure to check "Transfer user configuration and data files", then click next to enter the partitioning screen.
Select the disk to install to, click the Delete button, then select the new partition on the disk.
Click the arrow to go to the next step.
Select the newly created partition again, set the Mount point on the right, click the arrow again, and wait for the installation to finish.
After installation
Reboot after installing. Some errors may be reported after installation; these are caused by leftover files, which can simply be deleted. In a terminal, run
sudo rm -rf /var/crash/*
Then reboot once more and you're done.
If you installed the Xiaoqiang system image in a virtual machine, disable the boot-time startup service to avoid conflicts with Xiaoqiang:
sudo service startup stop
rosrun robot_upstart uninstall startup
Enjoy it
-
Creating Custom Ubuntu System Images and Backups with Systemback
Systemback is a piece of Ubuntu software for producing custom system images and system backups. Sometimes we have configured our Ubuntu system extensively: installed various packages, applied all kinds of custom settings. When we want another computer to run a system identical to ours, this is the method to use. It works not only for distributing a system, but also as a system backup.
Below is a walkthrough of installing and using the software.
Installation
For system versions up to and including Ubuntu 16.04, add the following repository:
sudo add-apt-repository ppa:nemh/systemback
sudo apt-get update && sudo apt-get install systemback unionfs-fuse
For Ubuntu 18.04 and later, add the repository as described here.
Usage
After installation you can find the software in the Dash menu.
Enter the administrator password; the window that opens looks like the figure below.
To create a system backup, just click Create new. Below is how to build an iso of a customized system.
- Click the Live system create button on the right; the window shown below appears.
- Check "Include the user data files" on the left so that everything in your home folder is included in the system image; many programs keep their configuration files there. Working Directory sets the working directory, where temporary files produced while the program runs are stored, so make sure it has enough free space.
- Click the Create new button to start, and wait for it to finish. The window then looks like the figure below.
The list on the right shows the backups that have been created. I had already created two, so two entries appear. At this point the files have not yet been converted to iso format: select the backup you want to convert and click Convert to ISO. When the conversion finishes, the generated iso file will be in your working directory.
This file can be used to install the system, and can also be used as a live system.
-
Xiaoqiang ROS Robot Tutorial (6): Xiaoqiang Video Streaming and Remote Control Windows Client
Users who received their Xiaoqiang before March 2017: before installing, see the post on upgrading the software packages to support the Xiaoqiang streaming and remote-control app.
The Windows client has now been replaced by the Galileo navigation system client; for usage, refer to the video streaming and control chapter of the Galileo navigation system manual.
Software installer download link
After downloading, double-click to install and just keep clicking Continue as prompted.
On first use the computer shows a prompt like the one below; check "Private networks and Public networks", then click "Allow access".
Troubleshooting
Q: The app cannot connect after starting.
A: Xiaoqiang and your computer may not be on the same LAN, or Xiaoqiang's server program may not be running. Run sudo service startup restart to restart the server program and try again.
Q: Connected successfully but cannot drive the robot.
A: Check that the base driver program is running and that the base's USB serial port is properly connected. Then run bwcheck and look for errors in the self-check output. If the self-check passes but the robot still won't move, check whether the infrared sensors are triggered; a triggered infrared sensor glows red.
Q: Connected successfully but no video stream.
A: Check that the camera's USB is properly connected, then restart the service and try again: sudo service startup restart. Run bwcheck; if the self-check data is normal but there is still no image, the client installation is broken. Re-check the client installation steps.
Q: The software asks for a certificate.
A: Since the server side has been upgraded to the Galileo navigation version, a certificate is now required. Xiaoqiang users can contact customer service, as prompted, to obtain one for free.
-
Getting Started with systemd
On a Linux system you often need programs to start automatically at boot and keep running in the background for a long time: a website's backend, a database server, and so on. Such long-running programs are called services. The operating system ships a mature manager for services: it can start them automatically at boot, restart them when they crash, and even handle the more complex case of dependencies between services. Below is a short introduction to using it.
Take Ubuntu as an example. Before 16.04 Ubuntu's service manager was upstart; from 16.04 on it is systemd. The two are used in fairly similar ways, so migrating from one to the other is easy. What follows is a quick tour of systemd.
Say we want to run a program in the background long-term, located at
/home/oumeng/Documents/SharpLink/SharpLink/bin/Debug/SharpLink.exe
and we want it to start automatically at boot. We can create a file called toxserver.service with the following content:
[Unit]
Description=Tox Server Daemon
After=basic.target

[Service]
WorkingDirectory=/home/oumeng/Documents/SharpLink/SharpLink/bin/Debug
ExecStart=/home/oumeng/Documents/SharpLink/SharpLink/bin/Debug/SharpLink.exe
Restart=always

[Install]
WantedBy=basic.target
The meaning of this file is straightforward. In the Unit section, Description introduces the service and After states when it should run; basic.target is a system-defined unit covering the programs involved in system startup, so After=basic.target means "run after boot". In the Service section, WorkingDirectory sets the working directory, ExecStart is the path of the service's executable, and Restart says whether to restart the service when it crashes; here it is set to always, so the service is always restarted. Finally, the Install section defines the service's dependency relationships; this example depends only on basic.target. Save the file in the /usr/lib/systemd/system directory. By default the service is not enabled; one command is needed to enable it before it can be used.
sudo systemctl enable toxserver.service
Start the service:
sudo service toxserver start
Check the service status:
sudo service toxserver status
The output looks like the figure below.
Stop the service:
sudo service toxserver stop
Disable the service:
sudo systemctl disable toxserver.service
View the service's log:
journalctl -u toxserver
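Putting the steps above together, a hedged sketch of the whole workflow (paths and names from the example; the privileged commands are commented out since they require root and a running systemd):

```shell
# Write the unit file locally, then install, enable, and start it.
cat > toxserver.service <<'EOF'
[Unit]
Description=Tox Server Daemon
After=basic.target

[Service]
WorkingDirectory=/home/oumeng/Documents/SharpLink/SharpLink/bin/Debug
ExecStart=/home/oumeng/Documents/SharpLink/SharpLink/bin/Debug/SharpLink.exe
Restart=always

[Install]
WantedBy=basic.target
EOF
grep -c '^\[' toxserver.service        # 3 sections: Unit, Service, Install
# sudo cp toxserver.service /usr/lib/systemd/system/
# sudo systemctl enable toxserver.service
# sudo service toxserver start
```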
-
Xiaoqiang ROS Robot Tutorial (5): Xiaoqiang Remote Control App for Android
New Xiaoqiang phone app
Download link
Notes:
Added navigation control, mapping, and other features. Documentation
The document describes the food-delivery robot, but this app differs only slightly from the Xiaoqiang version and can be used as a reference.
Usage
- Make sure Xiaoqiang and the controlling phone are on the same LAN.
- Start the server program on Xiaoqiang (it starts automatically by default).
- Open the app. If everything is normal you will see Xiaoqiang's voltage reading and image data. If there is no data, try tapping the reconnect button.
Troubleshooting
Q: The app cannot connect after starting.
A: Xiaoqiang and your phone may not be on the same LAN, or Xiaoqiang's server program may not be running. Run sudo service startup restart to restart the server program and try again.
Q: Connected successfully but cannot drive the robot.
A: Check that the base driver program is running and that the base's USB serial port is properly connected. Then run bwcheck, wait for the self-check to finish, and look for errors. If the self-check passes but the robot still won't move, check whether the infrared sensors are triggered; a triggered infrared sensor glows red or yellow.
Q: Connected successfully but no video stream.
A: Check that the camera's USB is properly connected, then restart the service and try again: sudo service startup restart
-
RE: rosbag play opens very slowly with very large bag files
The default rqt_bag has a bug: it can open a bag but cannot publish its topics. You need to download the rqt_bag source and recompile it before it works.
-
RE: Fix for the bug in kinetic's OpenCV cmake config file
Upgrading to OpenCV 3 still has many benefits: the code is largely compatible with 2, and many of 3's methods have GPU acceleration enabled, giving a large gain in runtime efficiency. That is why kinetic upgraded to OpenCV 3.
-
Streaming the System's Audio over the Network
Why do this? Because my computer's headphone jack is flaky: with headphones plugged in, I often get sound in the left ear but not the right, which is unbearable when gaming. Also, remote-desktop software like VNC carries no audio, which is uncomfortable to use. So I really needed software that can stream audio remotely. It now exists: the code is on Github, and although a few problems remain it is basically usable, and cross-platform too.
Here is how it is implemented.
The basic flow is to capture audio from the sound card and write it into an http stream, so the system's audio can be heard in a browser. The program is written in C#, a language I really like. Audio capture and transcoding use a library called cscore: you just hand the http stream to the corresponding cscore API. However, the HttpResponse stream is write-only while the output stream cscore's transcoder needs is read-write, so cscore's source has to be modified. Here is the key code.
Handling the stream in a C# Web API:
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;
using System.Web.Http;

namespace RemoteAudio.Controllers
{
    public class AudioController : ApiController
    {
        [Route("audio")]
        [HttpGet]
        public HttpResponseMessage Get()
        {
            var response = Request.CreateResponse();
            // Stream MP3 data to the client as it is produced.
            response.Content = new PushStreamContent(
                (Action<Stream, HttpContent, TransportContext>)(AudioServer.getInstance().WriteToStream),
                new MediaTypeHeaderValue("audio/mpeg"));
            response.Headers.Add("Cache-Control", "no-cache");
            return response;
        }
    }
}
The key is the PushStreamContent call above: the first parameter of the callback it invokes is the HttpResponse stream, and that stream is handed to the modified cscore API:
using (var encoder = MediaFoundationEncoder.CreateMP3Encoder(capture.WaveFormat, httpStream, 48000))
{
    capture.DataAvailable += (s, e) =>
    {
        encoder.Write(e.Data, e.Offset, e.ByteCount);
    };
}
Using it is also very simple.
Known issues
- There is a delay of about three seconds. Fine for listening to music, intolerable for gaming. My guess is that the encoding function buffers internally; this needs improving. Also, when the browser's pause button is pressed, playback resumes from the browser's cache, so the gap between it and the stream the server is currently sending keeps growing. Should this resynchronization be done in the browser with js, or on the server by some other means? I haven't decided yet.
- The httpResponse stream is never reclaimed. This also needs fixing, otherwise it leaks memory.
-
Fix for the bug in kinetic's OpenCV cmake config file
The kinetic version of ros ships with OpenCV 3.1, but referencing it directly from a cmake file can produce an error:
CMake Error in m-explore/map_merge/CMakeLists.txt:
  Imported target "opencv_xphoto" includes non-existent path
    "/usr/include/opencv-3.1.0-dev/opencv"
  in its INTERFACE_INCLUDE_DIRECTORIES. Possible reasons include:
  * The path was deleted, renamed, or moved to another location.
  * An install or uninstall procedure did not complete successfully.
  * The installation package was faulty and references files it does not provide.
I am not the only one to hit this error; see here, although that case was with OpenCV 3 on the jade version.
I eventually traced the cause to one setting in OpenCV's cmake config file: lines 144 and 116 of
/opt/ros/kinetic/share/OpenCV-3.1.0-dev/OpenCVConfig.cmake
# Extract the directory where *this* file has been installed (determined at cmake run-time)
if(CMAKE_VERSION VERSION_LESS "2.8.12")
  get_filename_component(OpenCV_CONFIG_PATH "${CMAKE_CURRENT_LIST_FILE}" PATH CACHE)
else()
  get_filename_component(OpenCV_CONFIG_PATH "${CMAKE_CURRENT_LIST_FILE}" DIRECTORY CACHE)
endif()
Remove the CACHE so it looks like this:
# Extract the directory where *this* file has been installed (determined at cmake run-time)
if(CMAKE_VERSION VERSION_LESS "2.8.12")
  get_filename_component(OpenCV_CONFIG_PATH "${CMAKE_CURRENT_LIST_FILE}" PATH)
else()
  get_filename_component(OpenCV_CONFIG_PATH "${CMAKE_CURRENT_LIST_FILE}" DIRECTORY)
endif()
With CACHE, the OpenCV path resolves to /usr/; without CACHE it resolves correctly. I don't know why. In principle CACHE only adds the path to the cache to improve efficiency and should not cause this. Perhaps the cache goes wrong because another version of OpenCV is installed on the system? From searching around, not everyone hits this problem, so it seems related to the local environment configuration. In any case, the method above solves it.
-
Installing the Nvidia Graphics Driver and CUDA
(Found online; the original link is here.) Note that this installs CUDA 7.5, while the latest is now 8.0, which you can download from the official site. Be sure not to choose the deb method, which causes problems; the run file works best. If you have already installed the driver, be sure to choose not to install the driver again when installing CUDA, otherwise the system's graphics driver will break.
In this article, I will share some of my experience on installing the NVIDIA driver and CUDA on Linux OS. Here I mainly use Ubuntu as an example. Comments for CentOS/Fedora are also provided as much as I can.
Table of Contents
Install NVIDIA Graphics Driver via apt-get
Install NVIDIA Graphics Driver via runfile
Remove Previous Installations (Important)
Download the Driver
Install Dependencies
Create Blacklist for Nouveau Driver
Stop lightdm/gdm/kdm
Executing the Runfile
Check the Installation
Common Errors and Solutions
Additional Notes
Install CUDA
Install cuDNN
Table of contents generated with markdown-toc
Install NVIDIA Graphics Driver via apt-get
In Ubuntu systems, drivers for NVIDIA Graphics Cards are already provided in the official repository. Installation is as simple as one command.
For ubuntu 14.04.5 LTS, the latest version is 352. To install the driver, execute sudo apt-get install nvidia-352 nvidia-modprobe, and then reboot the machine.
For ubuntu 16.04.1 LTS, the latest version is 361. To install the driver, execute sudo apt-get install nvidia-361 nvidia-modprobe, and then reboot the machine.
The nvidia-modprobe utility is used to load NVIDIA kernel modules and create NVIDIA character device files automatically every time your machine boots up.
It is recommended for new users to install the driver via this way because it is simple. However, it has some drawbacks:
The driver included in official Ubuntu repository is usually not the latest.
There would be some naming conflicts when other repositories (e.g. ones from CUDA) are added to the system.
One has to reinstall the driver after the Linux kernel is updated.
Install NVIDIA Graphics Driver via runfile
For advanced users who want the latest version of the driver, want to get rid of the reinstallation issue caused by dkms, or use a Linux distribution that does not provide nvidia drivers in its repositories, installing from the runfile is recommended.
Remove Previous Installations (Important)
One might have installed the driver via apt-get, so before reinstalling the driver from the runfile, previous installations must be uninstalled. Execute the following scripts carefully, one by one.
sudo apt-get purge nvidia*   # Note this might remove your cuda installation as well
sudo apt-get autoremove      # Recommended if .deb files from NVIDIA were installed
# Change 1404 to the exact system version or use tab autocompletion
# After executing this, /etc/apt/sources.list.d should contain no files related to nvidia or cuda
sudo dpkg -P cuda-repo-ubuntu1404
Download the Driver
The latest driver for NVIDIA products can always be fetched from NVIDIA’s official website. It is not necessary to select all terms carefully. The driver provided for the same Product Series and Operating System is generally the same. For example, in order to find a driver for a GTX TITAN X graphics card, selecting GeForce 900 Series in Product Series and Linux 64-bit in Operating System is enough.
If you want to download the driver directly in a Linux shell, the script below may be useful.
cd ~
wget http://us.download.nvidia.com/XFree86/Linux-x86_64/367.57/NVIDIA-Linux-x86_64-367.57.run
Detailed installation instructions can be found on the download page via a README hyperlink in the ADDITIONAL INFORMATION tab. I have also summarized the key steps below.
Install Dependencies
Software required for the runfile is officially listed here. But this page seems to be stale and not easy to follow.
For Ubuntu, installing the following dependencies is enough.
build-essential – For building the driver
gcc-multilib – For providing 32-bit support
dkms – For providing dkms support
(Optional) xorg and xorg-dev. On a workstation with a GUI this is required but usually already installed, since you already have a graphical display. On headless servers without a GUI it is not a must.
In summary, execute sudo apt-get install build-essential gcc-multilib dkms to install all dependencies.
Required packages for CentOS are epel-release dkms libstdc++.i686. Execute yum install epel-release dkms libstdc++.i686.
Required packages for Fedora are dkms libstdc++.i686 kernel-devel. Execute dnf install dkms libstdc++.i686 kernel-devel.
Create Blacklist for Nouveau Driver
Create a file at /etc/modprobe.d/blacklist-nouveau.conf with the following contents:
blacklist nouveau
options nouveau modeset=0
Note: It is also possible for the NVIDIA installation runfile to create this blacklist file automatically. Execute the runfile and follow the instructions when an error related to Nouveau appears. Then,
for Ubuntu 14.04 LTS, reboot the computer;
for Ubuntu 16.04 LTS, execute sudo update-initramfs -u and reboot the computer;
for CentOS/Fedora, execute sudo dracut --force and reboot the computer.
Stop lightdm/gdm/kdm
After the computer is rebooted, we need to stop the desktop manager before executing the runfile to install the driver. lightdm is the default desktop manager in Ubuntu. If the GNOME or KDE desktop environment is used, the installed desktop manager will be gdm or kdm.
For Ubuntu 14.04 / 16.04, execute sudo service lightdm stop (or use gdm or kdm instead of lightdm)
For Ubuntu 16.04 / Fedora / CentOS, execute sudo systemctl stop lightdm (or use gdm or kdm instead of lightdm)
Executing the Runfile
After all the above preparation, we can finally execute the runfile. (This is why, from the very beginning, I recommend new users install the driver via apt-get.)
cd ~
chmod +x NVIDIA-Linux-x86_64-367.57.run
sudo ./NVIDIA-Linux-x86_64-367.57.run --dkms -s
Note:
option --dkms is used to register the dkms module into the kernel, so that a kernel update will not require reinstalling the driver. This option should be turned on by default.
option -s enables silent installation, which should be used for batch installs. For installation on a single computer, turn this option off to get more installation information.
option --no-opengl-files can also be added if non-NVIDIA (AMD or Intel) graphics are used for display while the NVIDIA graphics are used for computation.
The installer may print a warning on a system without X.Org installed. In my experience it is safe to ignore:
WARNING: nvidia-installer was forced to guess the X library path '/usr/lib' and X module path '/usr/lib/xorg/modules'; these paths were not queryable from the system. If X fails to find the NVIDIA X driver module, please install the pkg-config utility and the X.Org SDK/development package for your distribution and reinstall the driver.
Check the Installation
After a successful installation, the nvidia-smi command will report all your CUDA-capable devices in the system.
Common Errors and Solutions
ERROR: Unable to load the ‘nvidia-drm’ kernel module.
One probable reason is that the system is boot from UEFI but Secure Boot option is turned on in the BIOS setting. Turn it off and the problem will be solved.
Additional Notes
nvidia-smi -pm 1 can enable persistent mode, which saves some time loading the driver. It has a significant effect on machines with more than 4 GPUs.
nvidia-smi -e 0 can disable ECC on TESLA products, which provides about 1/15 more video memory. A reboot is required to take effect. nvidia-smi -e 1 re-enables ECC.
nvidia-smi -pl can be used to increase or decrease the TDP limit of the GPU. Increasing it encourages a higher GPU Boost frequency, but is somewhat DANGEROUS and HARMFUL to the GPU. Decreasing it helps save power, which is useful for machines that do not have a sufficient power supply and shut down unexpectedly when all GPUs are pulled to their maximum load.
-i can be added after the above commands to specify an individual GPU.
These commands can be added to /etc/rc.local for execution at system boot.
Install CUDA
Installing CUDA from the runfile is much simpler and smoother than installing the NVIDIA driver: it just involves copying files to system directories and has nothing to do with the system kernel or online compilation. Removing CUDA is simply removing the installation directory. So I personally do not recommend adding NVIDIA's repositories and installing CUDA via apt-get or other package managers, as it does not reduce the complexity of installation or uninstallation but does increase the risk of messing up the repository configuration.
The CUDA runfile installer can be downloaded from NVIDIA's website. What you download is a package containing the following three components:
an NVIDIA driver installer, but usually of stale version;
the actual CUDA installer;
the CUDA samples installer;
To extract the three components above, execute the runfile installer with the --extract option. Then, executing the second one completes the CUDA installation. Installing the samples is also recommended, because useful tools such as deviceQuery and p2pBandwidthLatencyTest are provided. Scripts for installing the CUDA Toolkit are summarized below.
cd ~
wget http://developer.download.nvidia.com/compute/cuda/7.5/Prod/local_installers/cuda_7.5.18_linux.run
chmod +x cuda_7.5.18_linux.run
./cuda_7.5.18_linux.run --extract=$HOME
sudo ./cuda-linux64-rel-7.5.18-19867135.run
After the installation finishes, configure the runtime library.
sudo bash -c "echo /usr/local/cuda/lib64/ > /etc/ld.so.conf.d/cuda.conf"
sudo ldconfig
It is also recommended for Ubuntu users to append the string /usr/local/cuda/bin to the PATH in the system file /etc/environment so that nvcc is included in $PATH. This takes effect after reboot.
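A sketch of that edit, run here on a scratch copy rather than the real /etc/environment (which requires sudo); the PATH value is illustrative:

```shell
# Append /usr/local/cuda/bin to the PATH line of an environment file.
printf 'PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"\n' > environment.demo
sed -i 's|^PATH="\(.*\)"|PATH="\1:/usr/local/cuda/bin"|' environment.demo
grep -o '/usr/local/cuda/bin' environment.demo
# On a real system, run the same sed with sudo on /etc/environment, then re-log in.
```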
Install cuDNN
The recommended way to install cuDNN is to copy the tgz file to /usr/local, extract it there, and then remove the tgz file if necessary. This method preserves the symbolic links. Finally, execute sudo ldconfig to update the shared library cache.
-
Upgrading ORB_SLAM2's Dependencies to Improve Efficiency
ORB_SLAM2's default dependencies are g2o, OpenCV, and Eigen. The current versions of these packages are substantially newer than the ones the author used; for example, the new versions of OpenCV and Eigen can both use GPU acceleration. Below is how to modify ORB_SLAM2 to use the latest version of each dependency.
First, OpenCV. I use the 3.1 version that ships with ros.
If you are on a jade system, install OpenCV with
sudo apt-get install ros-jade-opencv3
If you are on a kinetic system, run
sudo apt-get install ros-kinetic-opencv3
The cmake file of the default OpenCV installation may cause problems; it is best to modify it as described in this article.
Add the OpenCV reference in ORB_SLAM2's cmake file: in ORB_SLAM2's CMakeList file, change
find_package(OpenCV 2.4.9 REQUIRED)
to
find_package(OpenCV 3.1.0 REQUIRED)
That completes the OpenCV change.
Installing Eigen
Download the latest version of Eigen from here, then compile and install it following the instructions on the site. Note that if you installed Eigen before, cmake may well fail to find the new Eigen location.
Edit ORB_SLAM2's cmakelist file, changing
find_package(Eigen3 3.1.0 REQUIRED)
to
find_package(Eigen3 3.3.1 REQUIRED)
Run cmake again. If cmake reports no errors, the new version was found; if it errors, it was not. The fix is fairly simple:
edit the ORB_SLAM2/cmake_modules/FindEigen3.cmake file and add at the very top
set(EIGEN3_INCLUDE_DIR "FALSE")
because another Eigen version may have set the EIGEN3_INCLUDE_DIR variable, causing the lookup to go wrong; resetting it like this avoids the problem.
That resolves the Eigen dependency.
Installing g2o
Installing g2o is a bigger problem, because the author used his own modified copy of g2o. So if we want to use the latest g2o, we have to port the author's modifications over.
-
Upgrading the ros System: from jade to kinetic
At the moment (January 2017) most people's ROS is the jade version based on ubuntu 14.04. The new kinetic version based on 16.04 has been out for a long time and is now fairly stable.
This article describes how to upgrade safely from a jade system to a kinetic system.
Before upgrading, the main differences between jade and kinetic are worth noting. I have noticed two:
- The differences between Ubuntu 14.04 and Ubuntu 16.04. The main change in 16.04 relative to 14.04 is the replacement of the system startup manager: upstart was replaced by systemd. Existing startup configuration files therefore have to be modified and ported before they can be used. The changes are fairly simple; google the relevant documentation.
- OpenCV 3 by default.
The new ROS system uses OpenCV 3 by default. If your existing programs depend on OpenCV 2, I still recommend switching to OpenCV 3; otherwise juggling OpenCV versions may become a problem. Although 3 is not fully compatible with 2, most code can be ported without modification, and 3 enables GPU acceleration by default, giving a large performance gain over 2. I tested this with the ORB_SLAM2 program, which ported to 3 without source changes.
Now let's start the upgrade proper.
Remove third-party repositories
Open the package manager.
Remove all the third-party repositories in here.
Remove jade packages
In a terminal, run
sudo apt-get remove ros-jade*
and wait for removal to finish.
Start the system upgrade
In a terminal, run
sudo update-manager -d
and confirm the upgrade in the window that pops up.
Then wait for the upgrade to finish; this can take a long time.
During the upgrade you may be asked whether to keep some configuration files; generally the default of keeping them is fine, otherwise you will have to rewrite the configuration files, which is a hassle.
Restore third-party repositories
In the earlier package manager window, just click the corresponding repositories; make sure to restore the ROS repository.
Install kinetic
In a terminal, run
sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
sudo apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net:80 --recv-key 421C365BD9FF1F717815A3895523BAEEB01FA116
sudo apt-get update
sudo apt-get install ros-kinetic-desktop-full
Wait for the installation to finish.
Initializing ROS
- Adjust environment variables
If you previously added jade environment variables to your bash startup configuration, change them to kinetic. For example, here is my .bashrc file:
source /opt/ros/kinetic/setup.bash
source /home/randoms/Documents/ros/workspace/devel/setup.sh
export ROS_PACKAGE_PATH=/home/randoms/Documents/ros/workspace/src:/home/randoms/Documents/ros/workspace/src/ORB_SLAM2/Examples/ROS:$ROS_PACKAGE_PATH
Adapt this to your own environment configuration.
- Initialize the workspace
Initialize rosdep:
sudo rosdep init  # if it says the file already exists, delete that file first
rosdep update
Suppose your original ros workspace is at /home/randoms/Documents/ros/workspace. This folder contains three directories, src, devel, and build; delete the build and devel folders.
Then run catkin_make to compile, and wait for compilation to finish.
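A sketch of those rebuild steps, using a demo directory in place of the real workspace path (the catkin_make call is commented out since it needs a ROS install):

```shell
# Clear the old build artifacts before recompiling; only src/ must survive.
ws=./demo_ws                                  # substitute your real workspace path
mkdir -p "$ws/src" "$ws/build" "$ws/devel"    # stand-in for an existing workspace
rm -rf "$ws/build" "$ws/devel"
ls "$ws"                                      # prints: src
# cd "$ws" && catkin_make                     # then recompile under kinetic
```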
Generally there will be some compile errors; recompiling the failing package usually clears them. Errors are usually caused by unmet package dependencies; fix them according to the error messages.
Confirm that kinetic is installed
Run
rviz
and see whether it starts.
Problems you may encounter
The headache with system upgrades is drivers. After the upgrade and a reboot you may find you cannot get into the desktop; this usually means the graphics driver broke. Don't panic: just reinstall the graphics driver. The advanced mode of the grub boot menu has a recovery mode, from which you can enter the system in text mode and reinstall the driver. The exact installation method differs per graphics card; googling it usually solves the problem. If you run into anything strange, feel free to comment below and I will try to help.
-
What is the difference between OpenCV 2.X and OpenCV 3.X?
Original link
Although 3 adds some features relative to 2, the biggest difference between 3 and 2 is speed.
The key difference is the OpenCV 3.x API: almost all OpenCV 3.X methods are OpenCL-accelerated, so every method that can run on the GPU gains 10% - 230% in performance. The only change your code needs is replacing Mat with UMat. To get the same speedups in OpenCV 2.X, you have to invoke the cv::ocl::* or cv::gpu::* methods explicitly. If you are a Java developer, even better: wrapped Java classes are now available.
The internal component structure has also changed, but from a developer's point of view you only need to update the corresponding header files.
So 3.X is the better choice. 3.X and 2.X are not compatible, but porting over is easy.
-
Wireless Debugging on Android
During android development you constantly use adb to read program output, install apps, and so on. Usually you develop with the phone connected to the computer over USB, but that is tedious, and frequent plugging also wears out the phone's USB port.
In fact android supports wireless debugging: wireless adb. If your phone is rooted this is very simple: search for "wireless adb", install the corresponding app, grant it root, and follow the app's instructions.
The method below is for phones that are not rooted. Original method link.
First connect the phone with a USB cable and enable the phone's USB debugging mode.
Then, on the computer, run
adb tcpip 5555
This switches adb into tcpip mode, listening on port 5555.
Now you can unplug the phone's USB. On the computer, run
adb connect 192.168.2.5
where the ip address is your phone's; replace it with your phone's actual address.
If you see output like the following, adb has connected successfully:
* daemon not running. starting it now on port 5037 *
* daemon started successfully *
connected to 192.168.2.5:5555
You can now debug the phone exactly as if it were connected by cable.
The android app mentioned earlier does exactly the first step: opening the tcp port adb listens on. The downside of not installing the app is that each time you must first connect over USB once; the upside is that the phone does not need root.
-
Basic Usage of rviz
rviz is a graphical tool that comes with ros, making it easy to interact with ros programs graphically. It is quite simple to use.
The overall interface is shown below.
The interface consists mainly of the display settings area on the left, the large display area in the middle, and the view settings area on the right. At the top are several navigation-related tools; at the bottom, a readout of ros status data.
Below, viewing ORB_SLAM2's topic data in rviz serves as a demonstration of how to use it.
Start the ORB_SLAM program
In a terminal, run in order
roscore
roslaunch ORB_SLAM2 map.launch
and wait for the program to start successfully.
Now run rostopic list in a terminal; output like the following means the program has started successfully.
Add topics to rviz
Click the add button at the bottom left of rviz; the dialog shown below pops up.
Click "By topic" and select the ORB_SLAM-related topics from the list below.
They are now added.
If an error like the one below appears after adding,
it is because the coordinate frame in Global Options is set incorrectly; change it to the corresponding frame. All other topics can be added conveniently in the same way.
Basic operations
The middle area shows the 3D point cloud computed by the ORB_SLAM program. You can adjust the view by dragging with the left mouse button; the exact controls are hinted in the status bar at the bottom.
The area on the right allows finer control over the view.
A look from another angle.
Saving the configuration
Once everything is configured, if you don't want to redo the same configuration every time, you can save the configuration file: the top menu has a save option. More detailed rviz information is on the official wiki.
-
How to Adjust the Resolution in Ubuntu VNC
VNC is a cross-platform remote desktop package and a very good choice in a Linux environment, but the default resolution on connecting tends to be small. How do you change it? Most advice online says to use -geometry WxH, but that does not work well for Ubuntu running the Unity desktop. Here is a method using xrandr.
xrandr --fb 1920x1080
Try this first to see whether it works; 1920x1080 is the resolution, so adjust to your own.
After connecting remotely, in a terminal run
xrandr -s WIDTHxHEIGHT
where WIDTH is your resolution's width and HEIGHT its height.
For example, at 1920x1080 resolution, just run xrandr -s 1920x1080.
If this prompts
the resolution is not available in 'Display Settings'
then first run the following commands in order to add the corresponding resolution setting:
gtf 1920 1080 60
xrandr --newmode "1920x1080_60.00" 172.80 1920 2040 2248 2576 1080 1081 1084 1118 -HSync +Vsync
xrandr --addmode VGA1 "1920x1080_60.00"
xrandr --output VGA1 --mode "1920x1080_60.00"
Then run
xrandr -s 1920x1080
again. The above is for 1920x1080; adjust the parameters for other resolutions.
Update
For some reason this method stopped working when I tried it again, but this command does work:
xrandr --fb 1920x1080
-
UC Berkeley's Salto Is the Most Agile Jumping Robot Ever
Ron Fearing’s Biomimetic Millisystems Lab at UC Berkeley is famous for its stable of bite-sized bio-inspired robots, and Duncan Haldane is responsible for a whole bunch of them. He’s worked on running robots, robots with wings, robots with tails, and even robots with hairs, in case that’s your thing. What Haldane and the other members of the lab are especially good at is looking to some of the most talented and capable animals for inspiration in their robotic designs.
One of the most talented and capable (and cutest) jumping animals is a fluffy little thing called a galago, or bushbaby. They live in Africa, weigh just a few kilos, and can leap tall (nearly two meter) bushes in a single bound. Part of the secret to this impressive jumping ability, which biologists only figured out a little over a decade ago, is that galagos use the structure of their legs to amplify the power of their muscles and tendons. In a paper just published in the (brand new!) journal Science Robotics, Haldane (along with M. M. Plecnik, J. K. Yim, and R. S. Fearing) demonstrate the jumping capability of a little 100g robot called Salto, which leverages the galago’s tricks into what has to be the most agile and impressive legged* jumping skill we’ve ever seen.
Useful motion through jumping is about more than just how high you can jump— it’s also about how frequently you can jump. For the purposes of this research, the term “agility” refers to how far upwards something can go while jumping over and over, or more technically, “the maximum achievable average vertical velocity of the jumping system while performing repeated jumps.” So, if you’re a galago, you can make jumps of 1.7m in height every 0.78s, giving you an agility of 2.2 m/s.
To be very agile, it’s not enough to be able to jump high: you also have to jump frequently. A robot like EPFL’s Jumper can make impressive vertical jumps of 1.3 meters, but it can only jump once every four seconds, giving it low agility. Minitaur, on the other hand, only jumps 0.48m, but it can do so every 0.43 second, giving it much higher agility despite its lower jumping height.
Increasing agility involves either jumping higher, jumping more frequently, or (usually) both. Galagos can jump high, but what makes them so agile is that they can jump high over and over again. Most robots that jump high have low agility, because (like EPFL’s Jumper) they have to spend time winding up a spring in order to store up enough energy to jump again, which kills their jump frequency. The Berkeley researchers wanted a robot that could match the galago in both jump height and frequency to achieve comparable agility, and they managed to get pretty darn close with Salto, which can make 1m hops every 0.58 seconds for an agility of 1.7 m/s.
The starting point for Salto’s jumping is common to many jumping robots: an elastic element, like a spring. In Salto’s case, the spring (which is a bit of rubber that can be twisted) is placed in series between a motor and the environment, resulting in a series elastic actuator (SEA). SEAs are nice because they help to protect the motor, allow for force control, let you recover some energy passively, and enable power modulation.
That last one is especially important: power modulation is a controlled (modulated) storing and releasing of power, and in the case of a jumping robot like Salto, it means that you can pump a bunch of energy into winding up a spring over a (relatively) long amount of time, and then release that energy over a (relatively) short amount of time. Many of the most successful jumping robots use elastic actuators to modulate how their actuators deliver power: by using a motor to wind up a spring, and then dumping all of that energy out of the spring at once to jump, robots can be much more powerful than if they were relying on the motor output alone.
Galagos have springs like this in the form of muscles and tendons, but what the Berkeley researchers implemented in Salto was something else that the galagos use to increase their jumping performance: a leg with variable mechanical advantage. The shape of a galago’s leg, and the technique that it uses to jump, allow it to output a staggering 15 times more power than its muscles can by themselves, and this kind of performance is the goal of Salto.
Mechanical advantage is what happens when you use a lever (like a crowbar) to convert a small amount of force and a large amount of motion into a large amount of force and a small amount of motion. What’s unique about Salto’s leg (and the legs of galagos and other jumping animals) is that its mechanical advantage is variable: when the leg is retracted (when the robot or animal is crouching on a surface), it has very low mechanical advantage. As the jumping motion begins, the mechanical advantage stays low as long as possible, and then rapidly increases as the leg extends in a jumping motion. Essentially, this slows down the takeoff part of the jump, giving the foot more time in contact with the surface. When the galago does this, Haldane calls it a “supercrouch.”
This mechanically-advantaged crouching adds 60 milliseconds to the amount of time that Salto spends in contact with a surface during the takeoff phase of a jump. It doesn’t sound like much, and you barely notice while watching the robot in action, but it more than doubles the time that Salto can transmit energy through its leg over a non-variable mechanically-advantaged design, which results in an increase in jumping power of nearly 3x. A Salto-like robot using only a series elastic actuator would be able to jump to 0.75m, while Salto itself (with its variable mechanical advantage leg) jumps to a full meter in height. This is what’s so cool about Salto: you get this massive boost to performance thanks purely to a very clever bio-inspired leg design. It’s not a galago yet, but it does do just as well as a bullfrog:
I guess the other thing that’s so cool about Salto is that it’s already doing Parkour— using a vertical surface that would otherwise be an obstacle to instead simultaneously increase its jump height and change direction. You’ve probably noticed that Salto doesn’t have a lot of sensing on it right now, and its jumping skills are all open-loop. In order to orient itself, it uses a rotary inertial tail, but it’s not (yet) able to adapt to different surfaces on its own.
The next things that the researchers will be working on include investigating new modes of locomotion, and of course chaining together multiple jumps, perhaps with integrated sensing. There’s also potential for adding another leg (or three) to see what happens, but at least in the near term, Haldane says he’s going to see how far he can get with the monopedal version of Salto.
It’s also worth mentioning that Salto’s variable mechanical advantage leg can be adapted to other legged robots that use SEAs, like StarlETH, ANYmal, or ATRIAS, and we’re very interested to see how this idea might improve the performance and efficiency of other platforms.
“Robotic Vertical Jumping Agility Via Series-Elastic Power Modulation,” by Duncan W. Haldane, M. M. Plecnik, J. K. Yim, and R. S. Fearing from UC Berkeley, was published today in the very first issue of Science Robotics.
[ UC Berkeley ]
- The researchers are, in general, comparing Salto to untethered, non-explosive jumpers. Using a tether for power means that you don’t have to worry nearly as much about efficiency, which is sort of cheating. And explosive jumpers (such as Sand Flea and these little rocket-jumpers) are certainly capable of some ridiculous jumping performance, but it’s difficult to compare their energy production to mechanical robots like Salto.
-
MIT's Modular Robotic Chain Is Whatever You Want It to Be
As sensors, computers, actuators, and batteries decrease in size and increase in efficiency, it becomes possible to make robots much smaller without sacrificing a whole lot of capability. There’s a lower limit on usefulness, however, if you’re making a robot that needs to interact with humans or human-scale objects. You can continue to leverage shrinking components if you make robots that are modular: in other words, big robots that are made up of lots of little robots.
In some ways, it’s more complicated to do this, because if one robot is complicated, n robots tend to be complicated^n. If you can get all of the communication and coordination figured out, though, a modular system offers tons of advantages: robots that come in any size you want, any configuration you want, and that are exceptionally easy to repair and reconfigure on the fly.
MIT’s ChainFORM is an interesting take on this idea: it’s an evolution of last year’s LineFORM multifunctional snake robot that introduces modularity to the system, letting you tear off a strip of exactly how much robot you need, and then reconfigure it to do all kinds of things.
MIT Media Lab calls ChainFORM a “shape changing interface,” because it comes from their Tangible Media Group, but if it came from a robotics group, it would be called a “poke-able modular snake robot with blinky lights.” Each ChainFORM module includes touch detection on multiple surfaces, angular detection, blinky lights, and motor actuation via a single servo motor. The trickiest bit is the communication architecture: MIT had to invent something that can automatically determine how many modules there are, and how the modules are connected to each other, while preserving the capability for real-time input and output. Since the relative position and orientation of each module is known at all times, you can do cool things like make a dynamically reconfigurable display that will continue to function (or adaptively change its function) even as you change the shape of the modules.
ChainFORM is not totally modular, in the sense that each module is not completely self-contained at this point: it’s tethered for power, and for overall control there’s a master board that interfaces with a computer over USB. The power tether also imposes a limit on the total number of modules that you can use at once because of the resistance of the connectors: no more than 32, unless you also connect power from the other end. The modules are still powerful, though: each can exert 0.8 kg/cm of torque, which is enough to move small things. It won’t move your limbs, but you’ll feel it trying, which makes it effective for haptic feedback applications, and able to support (and move) much of its own weight.
If it looks like ChainFORM has a lot of potential for useful improvements, that’s because ChainFORM has a lot of potential for useful improvements, according to the people who are developing useful improvements for it. They want to put displays on every surface, and increase their resolution. They want more joint configurations for connecting different modules and a way to split modules into different branches. And they want the modules to be able to self-assemble, like many modular robots are already able to do. The researchers also discuss things like adding different kinds of sensor modules and actuator modules, which would certainly increase the capability of the system as a whole without increasing the complexity of individual modules, but it would also make ChainFORM into more of a system of modules, which is (in my opinion) a bit less uniquely elegant than what ChainFORM is now.
“ChainFORM: A Linear Integrated Modular Hardware System for Shape Changing Interfaces,” by Ken Nakagaki, Artem Dementyev, Sean Follmer, Joseph A. Paradiso, and Hiroshi Ishii from the MIT Media Lab and Stanford University was presented at UIST 2016.