  • ARM NEON

    2019-08-08 11:06:00

    ARM NEON is a 128-bit SIMD (single instruction, multiple data) extension for the ARM Cortex-A series and Cortex-R52 processors.
    ARM CPUs originally had only general-purpose registers, supporting basic operations on basic data types. ARMv5 introduced the VFP (Vector Floating Point) instructions to accelerate floating-point computation, and ARMv7 formally introduced NEON; NEON far outperforms VFP, whose vector mode has since been deprecated. Like the MMX/SSE/AVX/FMA instructions on Intel CPUs, ARM's NEON instructions gain speed through vectorization. Use cases include, but are not limited to:

     


    1. Flexible video transcoding
    2. Speech recognition, advanced audio processing
    3. Enhanced captured video
    4. Computer vision, AR/VR
    5. Machine and deep learning
    6. Gaming, advanced user interfaces

     

    On SIMD (single instruction, multiple data) versus SISD (single instruction, single data): take an add instruction as an example. After decoding the instruction, a SISD CPU's execution unit first accesses memory to fetch the first operand, then accesses memory again to fetch the second operand, and only then performs the addition. In a SIMD CPU, after the instruction is decoded, several execution units access memory simultaneously and fetch all the operands in one go. This makes SIMD particularly well suited to data-intensive workloads such as multimedia processing.
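The contrast can be sketched in plain C. This is only a scalar illustration (both functions and their names are my own, not real SIMD code): the SISD loop consumes one pair of operands per step, while the 4-lane body of the second function stands in for a single SIMD instruction that processes four pairs at once.

```c
#include <assert.h>

/* SISD: one addition per loop iteration - two loads, one add,
 * one store each time through the loop. */
static void add_sisd(const int *a, const int *b, int *r, int n) {
    for (int i = 0; i < n; i++)
        r[i] = a[i] + b[i];
}

/* Scalar model of a 4-lane SIMD add: all four lanes are
 * conceptually handled by one instruction per iteration. */
static void add_simd4(const int *a, const int *b, int *r, int n) {
    for (int i = 0; i < n; i += 4) {
        r[i + 0] = a[i + 0] + b[i + 0];
        r[i + 1] = a[i + 1] + b[i + 1];
        r[i + 2] = a[i + 2] + b[i + 2];
        r[i + 3] = a[i + 3] + b[i + 3];
    }
}
```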

     

                                                                      

    NEON technology was introduced with the ARMv7-A and ARMv7-R instruction sets and has since been extended to the ARMv8-A and ARMv8-R instruction sets.
    NEON is designed to improve the multimedia experience by accelerating audio/video encoding and decoding, user interfaces, and 2D/3D graphics and gaming.
    NEON can also speed up applications by accelerating signal-processing algorithms and functions, such as audio and video processing, voice and facial recognition, computer vision, and deep learning.

     

    Overview
    NEON is a packed SIMD architecture: NEON registers are treated as vectors of elements of the same data type, and multiple data types are supported.
          


    NEON instructions perform the same operation in all lanes of the vector. The number of lanes and the operation width are determined by the element data type. NEON instructions follow these layouts:

    16x8-bit, 8x16-bit, 4x32-bit, 2x64-bit       integer operations
    8x16-bit*, 4x32-bit, 2x64-bit**              floating-point operations

    The layouts marked with asterisks are supported only on Armv8.2-A and Armv8-A/R.

     

    How to use NEON
    There are several ways to make use of NEON, including NEON-enabled libraries, the compiler's auto-vectorization, and embedded NEON code (intrinsics or assembly).

     

    Library

    The easiest way to use NEON is to use an open-source library that already contains NEON code.

     

    The Arm Compute Library, for machine learning and computer vision:
    a collection of low-level functions for image processing, computer vision, and machine learning, optimized for Arm CPU and GPU architectures.
    For more information see: https://developer.arm.com/technologies/compute-library

     

    Ne10 is a C-based open-source library hosted by Arm on GitHub; it contains a set of the most commonly used processing routines, heavily optimized for Arm.
    Ne10 is modular, built up from several smaller libraries.




    Libyuv

    is an open-source library for converting and scaling YUV data.

    Skia

    is an open-source 2D graphics library, used as the graphics engine for Google Chrome and Chrome OS, Android, Mozilla Firefox and Firefox OS, and many other products.

     

    The Neon ecosystem

    Neon is widely used in the areas listed below, including many cross-platform open-source projects:

    Video codecs: VP9 OTT encoder; VP9 consumer encoder/decoder; H.264 (AVC) encoder/decoder; MPEG-4 SP/ASP encoder/decoder; MPEG-2 decoder; H.263 decoder

    Audio codecs: MP3 encoder/decoder; MPEG-2 layer I & II encoder/decoder; MPEG-1 layer III audio encoder; MPEG-1 layer III audio encoder/decoder; HE-AAC v1/v2 encoder/decoder; WMA Standard encoder/decoder; WMA Pro and WMA Lossless decoder; SBC Bluetooth encoder/decoder; Ogg Vorbis encoder/decoder; FLAC encoder/decoder; Dolby Digital AC-3 encoder/decoder; Dolby Digital eAC-3 decoder; Dolby MS10/MS11 Multistream; Dolby Digital Plus 5.1/7.1 consumer decoder; Dolby Digital 5.1 Creator consumer encoder; Dolby Pro Logic I & II encoder/decoder; iSAC encoder/decoder; CELT encoder/decoder; DTS core encoder/decoder; DAB+ encoder/decoder; Dolby Mobile encoder/decoder; Dolby TrueHD consumer decoder; Dolby UDC encoder/decoder

    Voice and speech codecs: G.711; G.722, G.722.1, G.722.2-A; G.723.1; G.726; G.727; G.728; G.729, G.729A, G.729B; G.729AB; AMR Narrowband, Wideband, Wideband+; GSM-HR, GSM-ER, GSM-EFR; Opus; iLBC; SILK; SPEEX; MELPe

    Audio enhancement algorithms: echo cancellation; noise reduction; beam forming; comfort noise; AudioZoom; equalization; wind noise reduction; automatic gain control; voice activity detection; keyword spotting; voice trigger; voice biometrics; speaker verification

    Computer vision: Canny edge detection; Harris corner; ORB; convolution filter; erosion/dilation; face detection; pedestrian detection; FAST9/FAST12 corner detection; object tracking; lane departure; connected components

    Machine and deep learning: on-device object recognition; on-device scene recognition; human pose recognition; defect detection
    ---------------------
    Copyright notice: this article is the original work of CSDN blogger "rony2012" and is licensed under the CC 4.0 BY-SA license; when reposting, please attach the original link and this notice.
    Original link: https://blog.csdn.net/rony2012/article/details/76433431

    Reposted from: https://www.cnblogs.com/wei-chen-linux/p/11319963.html

  • ARM NEON Programming Series 8: ARM NEON Optimization

    2017-08-22 17:43:50

    https://zhuanlan.zhihu.com/p/24702989


    ARM NEON Optimization

    小鱼干
    8 months ago
    Original post: ARM NEON Optimization. An Example

    Converting an RGB image to grayscale, the original author gets a large speed-up with NEON. Let's use this example to learn how to use NEON.

    Since there is so little information about NEON optimizations out there I thought I’d write a little about it.

    Some weeks ago someone on the beagle-board mailing-list asked how to optimize a color to grayscale conversion for images. I haven't done much pixel processing with ARM NEON yet, so I gave it a try. The results I got were quite spectacular, but more on this later.

    For the color to grayscale conversion I used a very simple conversion scheme: a weighted average of the red, green and blue components. This conversion ignores the effect of gamma but works well enough in practice. I also decided not to do proper rounding. It's just an example, after all.

    First a reference implementation in C:

    void reference_convert (uint8_t * __restrict dest, uint8_t * __restrict src, int n)
    {
      int i;
      for (i=0; i<n; i++)
      {
        int r = *src++; // load red
        int g = *src++; // load green
        int b = *src++; // load blue
        // build weighted average:
        int y = (r*77)+(g*151)+(b*28);
        // undo the scale by 256 and write to memory:
        *dest++ = (y>>8);
      }
    }
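As an aside, the weights 77, 151 and 28 are fixed-point approximations of the usual luma coefficients (about 0.30, 0.59 and 0.11) scaled by 256, chosen so that they sum to exactly 256. A quick scalar sanity check (my addition, not from the original post):

```c
#include <assert.h>
#include <stdint.h>

/* Scalar grayscale of a single pixel, matching reference_convert:
 * weighted sum in 16+ bits, then undo the scale-by-256. */
static uint8_t gray(uint8_t r, uint8_t g, uint8_t b) {
    return (uint8_t)((r * 77 + g * 151 + b * 28) >> 8);
}
```

Because the weights sum to 256, a pure white pixel maps back to exactly 255 after the shift.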
    

    Optimization with NEON Intrinsics
    Let's start optimizing the code using the compiler intrinsics. Intrinsics are nice to use because they behave just like C functions but compile down to single assembler statements. At least in theory, as I'll show you later...

    Since NEON works in 64 or 128 bit registers it’s best to process eight pixels in parallel. That way we can exploit the parallel nature of the SIMD-unit. Here is what I came up with:

    void neon_convert (uint8_t * __restrict dest, uint8_t * __restrict src, int n)
    {
      int i;
      uint8x8_t rfac = vdup_n_u8 (77);
      uint8x8_t gfac = vdup_n_u8 (151);
      uint8x8_t bfac = vdup_n_u8 (28);
      n/=8;
      for (i=0; i<n; i++)
      {
        uint16x8_t  temp;
        uint8x8x3_t rgb  = vld3_u8 (src);
        uint8x8_t result;
        temp = vmull_u8 (rgb.val[0],      rfac);
        temp = vmlal_u8 (temp,rgb.val[1], gfac);
        temp = vmlal_u8 (temp,rgb.val[2], bfac);
        result = vshrn_n_u16 (temp, 8);
        vst1_u8 (dest, result);
        src  += 8*3;
        dest += 8;
      }
    }
    

    Let's take a look at it step by step:

    First off I load my weight factors into three NEON registers. The vdup.8 instruction does this and also replicates the byte into all 8 bytes of the NEON register.

    Load the preset value into all 8 bytes of a 64-bit register:

    uint8x8_t rfac = vdup_n_u8 (77);
    uint8x8_t gfac = vdup_n_u8 (151);
    uint8x8_t bfac = vdup_n_u8 (28);

    Now I load 8 pixels at once into three registers.

    Load three uint8x8 vectors into three 64-bit registers in one go:
    uint8x8x3_t rgb = vld3_u8 (src);

    The vld3.8 instruction is a specialty of the NEON instruction set. With NEON you can not only do loads and stores of multiple registers at once, you can de-interleave the data on the fly as well. Since I expect my pixel data to be interleaved the vld3.8 instruction is a perfect fit for a tight loop.

    After the load, I have all the red components of 8 pixels in the first loaded register. The green components end up in the second and blue in the third.
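The effect of vld3.8 can be modeled in scalar C: 24 interleaved bytes (RGBRGB...) become three 8-byte planes. A sketch for illustration only (the function name is mine):

```c
#include <assert.h>
#include <stdint.h>

/* Scalar model of vld3.8 {d0-d2}: de-interleave 8 RGB pixels
 * into separate r, g and b planes of 8 bytes each. */
static void deinterleave_rgb8(const uint8_t *src,
                              uint8_t r[8], uint8_t g[8], uint8_t b[8]) {
    for (int i = 0; i < 8; i++) {
        r[i] = src[3 * i + 0];
        g[i] = src[3 * i + 1];
        b[i] = src[3 * i + 2];
    }
}
```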

    Now calculate the weighted average:


    temp = vmull_u8 (rgb.val[0], rfac);
    temp = vmlal_u8 (temp,rgb.val[1], gfac);
    temp = vmlal_u8 (temp,rgb.val[2], bfac);

    vmull.u8 multiplies each byte of the first argument with each corresponding byte of the second argument. Each result becomes a 16 bit unsigned integer, so no overflow can happen. The entire result is returned as a 128 bit NEON register pair.

    vmull_u8 multiplies each byte of its first argument by the corresponding byte of its second argument; each product is a 16-bit unsigned integer, so no overflow can occur, and the full result goes into a 128-bit register. This is where NEON's power shows.

    vmlal.u8 does the same thing as vmull.u8 but also adds the content of another register to the result.

    vmlal_u8 does what vmull_u8 does, and additionally adds in the value of its first argument.

    So we end up with just three instructions for weighted average of eight pixels. Nice.

    Just three instructions implement the weighted-average operation we need.

    Now it's time to undo the scaling of the weight factors. To do so I shift each 16-bit result to the right by 8 bits. This equals a division by 256. ARM NEON has lots of instructions to do the shift, but a "narrow" variant also exists. This one does two things at once: it does the shift and afterwards converts the 16-bit integers back to 8 bits by removing all the high bytes from the result. We get back from the 128-bit register pair to a single 64-bit register.

    vshrn_n_u16 shifts each 16-bit lane of its first argument (a 128-bit register) right by the number of bits given in the second argument, and stores the narrowed result in a 64-bit register:


    result = vshrn_n_u16 (temp, 8);

    And finally store the result.

    Finally, store the result to dest:
    vst1_u8 (dest, result);
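Putting the steps together: the per-lane behavior of the kernel (vmull.u8, two vmlal.u8, vshrn by 8) can be written out in scalar C, and it should produce exactly the same bytes as reference_convert. A model for illustration, assuming 16-bit intermediates as in the intrinsics:

```c
#include <assert.h>
#include <stdint.h>

/* Scalar model of one 8-pixel NEON iteration:
 * vmull.u8 / vmlal.u8 widen to 16 bits, vshrn narrows back.
 * The 16-bit sum cannot overflow: max is 255 * (77+151+28) = 65280. */
static void convert8_model(uint8_t *dest, const uint8_t *src) {
    for (int lane = 0; lane < 8; lane++) {
        uint16_t temp = (uint16_t)(src[3 * lane + 0] * 77);   /* vmull.u8 */
        temp += (uint16_t)(src[3 * lane + 1] * 151);          /* vmlal.u8 */
        temp += (uint16_t)(src[3 * lane + 2] * 28);           /* vmlal.u8 */
        dest[lane] = (uint8_t)(temp >> 8);                    /* vshrn #8 */
    }
}
```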

    First Results:
    How do the reference C function and the NEON-optimized version compare? I did a test on my Omap3 Cortex-A8 CPU on the beagle-board and got the following timings:

    C-version: 15.1 cycles per pixel.
    NEON-version: 9.9 cycles per pixel.

    That’s only a speed-up of factor 1.5. I expected much more from the NEON implementation. It processes 8 pixels with just 6 instructions after all. What’s going on here? A look at the assembler output explained it all. Here is the inner-loop part of the convert function:

    The speed-up is only 1.5x. Let's look at the generated assembly to see what happened:

     160:   f46a040f        vld3.8  {d16-d18}, [sl]
     164:   e1a0c005        mov     ip, r5
     168:   ecc80b06        vstmia  r8, {d16-d18}
     16c:   e1a04007        mov     r4, r7
     170:   e2866001        add     r6, r6, #1      ; 0x1
     174:   e28aa018        add     sl, sl, #24     ; 0x18
     178:   e8bc000f        ldm     ip!, {r0, r1, r2, r3}
     17c:   e15b0006        cmp     fp, r6
     180:   e1a08005        mov     r8, r5
     184:   e8a4000f        stmia   r4!, {r0, r1, r2, r3}
     188:   eddd0b06        vldr    d16, [sp, #24]
     18c:   e89c0003        ldm     ip, {r0, r1}
     190:   eddd2b08        vldr    d18, [sp, #32]
     194:   f3c00ca6        vmull.u8        q8, d16, d22
     198:   f3c208a5        vmlal.u8        q8, d18, d21
     19c:   e8840003        stm     r4, {r0, r1}
     1a0:   eddd3b0a        vldr    d19, [sp, #40]
     1a4:   f3c308a4        vmlal.u8        q8, d19, d20
     1a8:   f2c80830        vshrn.i16       d16, q8, #8
     1ac:   f449070f        vst1.8  {d16}, [r9]
     1b0:   e2899008        add     r9, r9, #8      ; 0x8
     1b4:   caffffe9        bgt     160
    

    Note the store at offset 168? The compiler decides to write the three registers onto the stack. After a number of useless memory accesses from the GPP side, the compiler reloads them (offsets 188, 190 and 1a0) into exactly the same physical NEON registers.

    What do all the ordinary integer instructions do? I have no idea. Lots of memory accesses target the stack for no good reason. There is definitely no shortage of registers anywhere. For reference: I used the GCC 4.3.3 (CodeSourcery 2009q1 lite) compiler.

    NEON and assembler
    Since the compiler can't generate good code, I wrote the same loop in assembler. In a nutshell I just took the intrinsic-based loop and converted the instructions one by one. The loop control is a bit different, but that's all.

    The compiler did not generate efficient assembly, so the author rewrote the code by hand:
    convert_asm_neon:
        # r0: Ptr to destination data
        # r1: Ptr to source data
        # r2: Iteration count
        push        {r4-r5, lr}
        lsr         r2, r2, #3
        # build the three constants:
        mov         r3, #77
        mov         r4, #151
        mov         r5, #28
        vdup.8      d3, r3
        vdup.8      d4, r4
        vdup.8      d5, r5
    .loop:
        # load 8 pixels:
        vld3.8      {d0-d2}, [r1]!
        # do the weighted average:
        vmull.u8    q3, d0, d3
        vmlal.u8    q3, d1, d4
        vmlal.u8    q3, d2, d5
        # shift and store:
        vshrn.u16   d6, q3, #8
        vst1.8      {d6}, [r0]!
        subs        r2, r2, #1
        bne         .loop
        pop         {r4-r5, pc}

    Final Results:
    Time for some benchmarking again. How does the hand-written assembler version compare? Well, here are the results:

    C-version: 15.1 cycles per pixel.
    NEON-version: 9.9 cycles per pixel.
    Assembler: 2.0 cycles per pixel.

    That's roughly a factor of five over the intrinsic version and 7.5 times faster than my not-so-bad C implementation. And keep in mind: I didn't even optimize the assembler loop.

    The hand-optimized assembly is 7.5 times as fast as the C function. The compiler may fail to generate efficient assembly from intrinsics; in that case you need to rewrite the NEON parts in assembly yourself.
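The quoted factors follow directly from the cycle counts: 15.1 / 2.0 is about 7.5, and 9.9 / 2.0 is about 5. A trivial check of that arithmetic (my addition):

```c
#include <assert.h>

/* Speed-up factor implied by two per-pixel cycle counts. */
static double speedup(double baseline_cycles, double optimized_cycles) {
    return baseline_cycles / optimized_cycles;
}
```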

    My conclusion: if you want performance out of your NEON unit, stay away from the intrinsics. They are nice as a prototyping tool. Use them to get your algorithm working and then rewrite the NEON parts of it in assembler.

    Btw: Sorry for the ugly syntax-highlighting. I’m still looking for a nice wordpress plug-in.

    More NEON C-style intrinsics will be added below as they come up:

    1. vdup_n_u8(val): replicate one byte into all 8 bytes of a 64-bit register;

    2. vld3_u8(src): load three uint8x8 vectors into three 64-bit registers in one go, de-interleaving;

    3. vmull_u8(parm1, parm2): multiply each byte of the first argument by the corresponding byte of the second; each product is a 16-bit unsigned integer, and the results fill a 128-bit register;

    4. vmlal_u8(add, parm1, parm2): like vmull_u8, but also adds the value of the first argument to the result;

    5. vshrn_n_u16(src, n): shift each 16-bit lane of the 128-bit first argument right by the second argument, narrow each lane, and store the result in a 64-bit register.


    gcc.gnu.org/onlinedocs/

    This section introduces the NEON intrinsics built into the arm-linux compiler. When the -mfpu=neon compile option is enabled, ARM SIMD instructions can be used from C/C++ code, including add, multiply, compare, shift, absolute value, min/max and extrema, and store and load operations.

    Some of the instruction names are not self-explanatory; the code below helps explain what they do.

    #ifndef __ARM_NEON__  
    #error You must enable NEON instructions (e.g. -mfloat-abi=softfp -mfpu=neon) to use arm_neon.h  
    #endif  
      
    /* (1) Normal instructions: produce a result vector of the same size and,  
    usually, the same type as the operand vectors;  
    (2) Long instructions: operate on doubleword vector operands and produce a  
    quadword result; the result elements are generally twice the width of the  
    operand elements, and of the same type;  
    (3) Wide instructions: operate on one doubleword and one quadword vector  
    operand and produce a quadword result; the result elements and the first  
    operand's elements are twice the width of the second operand's elements;  
    (4) Narrow instructions: operate on quadword vector operands and produce a  
    doubleword result; the result elements are generally half the width of the  
    operand elements;  
    (5) Saturating instructions: results that would exceed the range of the  
    data type are clamped to that range. */  
      
    /******************************************************Addition*************************/  
    /*--1、Vector add(normal): vadd -> ri = ai + bi; r, a, b have equal lane sizes--*/  
    int8x8_t vadd_s8 (int8x8_t __a, int8x8_t __b);//_mm_add_epi8  
    int16x4_t vadd_s16 (int16x4_t __a, int16x4_t __b);//_mm_add_epi16  
    int32x2_t vadd_s32 (int32x2_t __a, int32x2_t __b);//_mm_add_epi32  
    int64x1_t vadd_s64 (int64x1_t __a, int64x1_t __b);//_mm_add_epi64  
    //_mm_add_ps, SSE, use only low 64 bits  
    float32x2_t vadd_f32 (float32x2_t __a, float32x2_t __b);  
    uint8x8_t vadd_u8 (uint8x8_t __a, uint8x8_t __b);//_mm_add_epi8  
    uint16x4_t vadd_u16 (uint16x4_t __a, uint16x4_t __b);//_mm_add_epi16  
    uint32x2_t vadd_u32 (uint32x2_t __a, uint32x2_t __b);//_mm_add_epi32  
    uint64x1_t vadd_u64 (uint64x1_t __a, uint64x1_t __b);//_mm_add_epi64  
    int8x16_t vaddq_s8 (int8x16_t __a, int8x16_t __b);//_mm_add_epi8  
    int16x8_t vaddq_s16 (int16x8_t __a, int16x8_t __b);//_mm_add_epi16  
    int32x4_t vaddq_s32 (int32x4_t __a, int32x4_t __b);//_mm_add_epi32  
    int64x2_t vaddq_s64 (int64x2_t __a, int64x2_t __b);//_mm_add_epi64  
    float32x4_t vaddq_f32 (float32x4_t __a, float32x4_t __b);//_mm_add_ps  
    uint8x16_t vaddq_u8 (uint8x16_t __a, uint8x16_t __b);//_mm_add_epi8  
    uint16x8_t vaddq_u16 (uint16x8_t __a, uint16x8_t __b);//_mm_add_epi16  
    uint32x4_t vaddq_u32 (uint32x4_t __a, uint32x4_t __b);//_mm_add_epi32  
    uint64x2_t vaddq_u64 (uint64x2_t __a, uint64x2_t __b);//_mm_add_epi64  
    /*--2、Vector long add(long): vaddl -> ri = ai + bi; a, b have equal lane sizes,  
    result is a 128 bit vector of lanes that are twice the width--*/  
    int16x8_t vaddl_s8 (int8x8_t __a, int8x8_t __b);  
    int32x4_t vaddl_s16 (int16x4_t __a, int16x4_t __b);  
    int64x2_t vaddl_s32 (int32x2_t __a, int32x2_t __b);  
    uint16x8_t vaddl_u8 (uint8x8_t __a, uint8x8_t __b);  
    uint32x4_t vaddl_u16 (uint16x4_t __a, uint16x4_t __b);  
    uint64x2_t vaddl_u32 (uint32x2_t __a, uint32x2_t __b);  
    /*--3、Vector wide add(wide): vaddw -> ri = ai + bi--*/  
    int16x8_t vaddw_s8 (int16x8_t __a, int8x8_t __b);  
    int32x4_t vaddw_s16 (int32x4_t __a, int16x4_t __b);  
    int64x2_t vaddw_s32 (int64x2_t __a, int32x2_t __b);  
    uint16x8_t vaddw_u8 (uint16x8_t __a, uint8x8_t __b);  
    uint32x4_t vaddw_u16 (uint32x4_t __a, uint16x4_t __b);  
    uint64x2_t vaddw_u32 (uint64x2_t __a, uint32x2_t __b);  
    /*--4、Vector halving add: vhadd -> ri = (ai + bi) >> 1;  
    shifts each result right one bit, Results are truncated--*/  
    int8x8_t vhadd_s8 (int8x8_t __a, int8x8_t __b);  
    int16x4_t vhadd_s16 (int16x4_t __a, int16x4_t __b);  
    int32x2_t vhadd_s32 (int32x2_t __a, int32x2_t __b);  
    uint8x8_t vhadd_u8 (uint8x8_t __a, uint8x8_t __b);  
    uint16x4_t vhadd_u16 (uint16x4_t __a, uint16x4_t __b);  
    uint32x2_t vhadd_u32 (uint32x2_t __a, uint32x2_t __b);  
    int8x16_t vhaddq_s8 (int8x16_t __a, int8x16_t __b);  
    int16x8_t vhaddq_s16 (int16x8_t __a, int16x8_t __b);  
    int32x4_t vhaddq_s32 (int32x4_t __a, int32x4_t __b);  
    uint8x16_t vhaddq_u8 (uint8x16_t __a, uint8x16_t __b);  
    uint16x8_t vhaddq_u16 (uint16x8_t __a, uint16x8_t __b);  
    uint32x4_t vhaddq_u32 (uint32x4_t __a, uint32x4_t __b);  
    /*--5、Vector rounding halving add: vrhadd -> ri = (ai + bi + 1) >> 1;  
    shifts each result right one bit; results are rounded--*/  
    int8x8_t vrhadd_s8 (int8x8_t __a, int8x8_t __b);  
    int16x4_t vrhadd_s16 (int16x4_t __a, int16x4_t __b);  
    int32x2_t vrhadd_s32 (int32x2_t __a, int32x2_t __b);  
    uint8x8_t vrhadd_u8 (uint8x8_t __a, uint8x8_t __b);//_mm_avg_epu8  
    uint16x4_t vrhadd_u16 (uint16x4_t __a, uint16x4_t __b);//_mm_avg_epu16  
    uint32x2_t vrhadd_u32 (uint32x2_t __a, uint32x2_t __b);  
    int8x16_t vrhaddq_s8 (int8x16_t __a, int8x16_t __b);  
    int16x8_t vrhaddq_s16 (int16x8_t __a, int16x8_t __b);  
    int32x4_t vrhaddq_s32 (int32x4_t __a, int32x4_t __b);  
    uint8x16_t vrhaddq_u8 (uint8x16_t __a, uint8x16_t __b);//_mm_avg_epu8  
    uint16x8_t vrhaddq_u16 (uint16x8_t __a, uint16x8_t __b);//_mm_avg_epu16  
    uint32x4_t vrhaddq_u32 (uint32x4_t __a, uint32x4_t __b);  
    /*--6、Vector saturating add(saturating): vqadd -> ri = sat(ai + bi);  
    the results are saturated if they overflow--*/  
    int8x8_t vqadd_s8 (int8x8_t __a, int8x8_t __b);//_mm_adds_epi8  
    int16x4_t vqadd_s16 (int16x4_t __a, int16x4_t __b);//_mm_adds_epi16  
    int32x2_t vqadd_s32 (int32x2_t __a, int32x2_t __b);  
    int64x1_t vqadd_s64 (int64x1_t __a, int64x1_t __b);  
    uint8x8_t vqadd_u8 (uint8x8_t __a, uint8x8_t __b);//_mm_adds_epu8  
    uint16x4_t vqadd_u16 (uint16x4_t __a, uint16x4_t __b);//_mm_adds_epu16  
    uint32x2_t vqadd_u32 (uint32x2_t __a, uint32x2_t __b);  
    uint64x1_t vqadd_u64 (uint64x1_t __a, uint64x1_t __b);  
    int8x16_t vqaddq_s8 (int8x16_t __a, int8x16_t __b);//_mm_adds_epi8  
    int16x8_t vqaddq_s16 (int16x8_t __a, int16x8_t __b);//_mm_adds_epi16  
    int32x4_t vqaddq_s32 (int32x4_t __a, int32x4_t __b);  
    int64x2_t vqaddq_s64 (int64x2_t __a, int64x2_t __b);  
    uint8x16_t vqaddq_u8 (uint8x16_t __a, uint8x16_t __b);//_mm_adds_epu8  
    uint16x8_t vqaddq_u16 (uint16x8_t __a, uint16x8_t __b);//_mm_adds_epu16  
    uint32x4_t vqaddq_u32 (uint32x4_t __a, uint32x4_t __b);  
    uint64x2_t vqaddq_u64 (uint64x2_t __a, uint64x2_t __b);  
    /*--7、Vector add high half(narrow): vaddhn -> ri = ai + bi;  
    returning the most significant half of each result; the results are truncated--*/  
    int8x8_t vaddhn_s16 (int16x8_t __a, int16x8_t __b);  
    int16x4_t vaddhn_s32 (int32x4_t __a, int32x4_t __b);  
    int32x2_t vaddhn_s64 (int64x2_t __a, int64x2_t __b);  
    uint8x8_t vaddhn_u16 (uint16x8_t __a, uint16x8_t __b);  
    uint16x4_t vaddhn_u32 (uint32x4_t __a, uint32x4_t __b);  
    uint32x2_t vaddhn_u64 (uint64x2_t __a, uint64x2_t __b);  
    /*--8、Vector rounding add high half(narrow): vraddhn -> ri = ai + bi;  
    returning the most significant half of each result; the results are rounded--*/  
    int8x8_t vraddhn_s16 (int16x8_t __a, int16x8_t __b);  
    int16x4_t vraddhn_s32 (int32x4_t __a, int32x4_t __b);  
    int32x2_t vraddhn_s64 (int64x2_t __a, int64x2_t __b);  
    uint8x8_t vraddhn_u16 (uint16x8_t __a, uint16x8_t __b);  
    uint16x4_t vraddhn_u32 (uint32x4_t __a, uint32x4_t __b);  
    uint32x2_t vraddhn_u64 (uint64x2_t __a, uint64x2_t __b);  
    /*******************************************Multiplication******************************/  
    /*--1、Vector multiply(normal): vmul -> ri = ai * bi;--*/  
    int8x8_t vmul_s8 (int8x8_t __a, int8x8_t __b);  
    int16x4_t vmul_s16 (int16x4_t __a, int16x4_t __b);//_mm_mullo_epi16  
    int32x2_t vmul_s32 (int32x2_t __a, int32x2_t __b);  
    float32x2_t vmul_f32 (float32x2_t __a, float32x2_t __b);//_mm_mul_ps  
    uint8x8_t vmul_u8 (uint8x8_t __a, uint8x8_t __b);  
    uint16x4_t vmul_u16 (uint16x4_t __a, uint16x4_t __b);//_mm_mullo_epi16  
    uint32x2_t vmul_u32 (uint32x2_t __a, uint32x2_t __b);  
    poly8x8_t vmul_p8 (poly8x8_t __a, poly8x8_t __b);  
    int8x16_t vmulq_s8 (int8x16_t __a, int8x16_t __b);  
    int16x8_t vmulq_s16 (int16x8_t __a, int16x8_t __b);//_mm_mullo_epi16  
    int32x4_t vmulq_s32 (int32x4_t __a, int32x4_t __b);  
    float32x4_t vmulq_f32 (float32x4_t __a, float32x4_t __b);//_mm_mul_ps  
    uint8x16_t vmulq_u8 (uint8x16_t __a, uint8x16_t __b);  
    uint16x8_t vmulq_u16 (uint16x8_t __a, uint16x8_t __b);//_mm_mullo_epi16  
    uint32x4_t vmulq_u32 (uint32x4_t __a, uint32x4_t __b);  
    poly8x16_t vmulq_p8 (poly8x16_t __a, poly8x16_t __b);  
    /*--2、Vector multiply accumulate: vmla -> ri = ai + bi * ci; --*/  
    int8x8_t vmla_s8 (int8x8_t __a, int8x8_t __b, int8x8_t __c);  
    int16x4_t vmla_s16 (int16x4_t __a, int16x4_t __b, int16x4_t __c);  
    int32x2_t vmla_s32 (int32x2_t __a, int32x2_t __b, int32x2_t __c);  
    float32x2_t vmla_f32 (float32x2_t __a, float32x2_t __b, float32x2_t __c);  
    uint8x8_t vmla_u8 (uint8x8_t __a, uint8x8_t __b, uint8x8_t __c);  
    uint16x4_t vmla_u16 (uint16x4_t __a, uint16x4_t __b, uint16x4_t __c);  
    uint32x2_t vmla_u32 (uint32x2_t __a, uint32x2_t __b, uint32x2_t __c);  
    int8x16_t vmlaq_s8 (int8x16_t __a, int8x16_t __b, int8x16_t __c);  
    int16x8_t vmlaq_s16 (int16x8_t __a, int16x8_t __b, int16x8_t __c);  
    int32x4_t vmlaq_s32 (int32x4_t __a, int32x4_t __b, int32x4_t __c);  
    float32x4_t vmlaq_f32 (float32x4_t __a, float32x4_t __b, float32x4_t __c);  
    uint8x16_t vmlaq_u8 (uint8x16_t __a, uint8x16_t __b, uint8x16_t __c);  
    uint16x8_t vmlaq_u16 (uint16x8_t __a, uint16x8_t __b, uint16x8_t __c);  
    uint32x4_t vmlaq_u32 (uint32x4_t __a, uint32x4_t __b, uint32x4_t __c);  
    /*--3、Vector multiply accumulate long: vmlal -> ri = ai + bi * ci --*/  
    int16x8_t vmlal_s8 (int16x8_t __a, int8x8_t __b, int8x8_t __c);  
    int32x4_t vmlal_s16 (int32x4_t __a, int16x4_t __b, int16x4_t __c);  
    int64x2_t vmlal_s32 (int64x2_t __a, int32x2_t __b, int32x2_t __c);  
    uint16x8_t vmlal_u8 (uint16x8_t __a, uint8x8_t __b, uint8x8_t __c);  
    uint32x4_t vmlal_u16 (uint32x4_t __a, uint16x4_t __b, uint16x4_t __c);  
    uint64x2_t vmlal_u32 (uint64x2_t __a, uint32x2_t __b, uint32x2_t __c);  
    /*--4、Vector multiply subtract: vmls -> ri = ai - bi * ci --*/  
    int8x8_t vmls_s8 (int8x8_t __a, int8x8_t __b, int8x8_t __c);  
    int16x4_t vmls_s16 (int16x4_t __a, int16x4_t __b, int16x4_t __c);  
    int32x2_t vmls_s32 (int32x2_t __a, int32x2_t __b, int32x2_t __c);  
    float32x2_t vmls_f32 (float32x2_t __a, float32x2_t __b, float32x2_t __c);  
    uint8x8_t vmls_u8 (uint8x8_t __a, uint8x8_t __b, uint8x8_t __c);  
    uint16x4_t vmls_u16 (uint16x4_t __a, uint16x4_t __b, uint16x4_t __c);  
    uint32x2_t vmls_u32 (uint32x2_t __a, uint32x2_t __b, uint32x2_t __c);  
    int8x16_t vmlsq_s8 (int8x16_t __a, int8x16_t __b, int8x16_t __c);  
    int16x8_t vmlsq_s16 (int16x8_t __a, int16x8_t __b, int16x8_t __c);  
    int32x4_t vmlsq_s32 (int32x4_t __a, int32x4_t __b, int32x4_t __c);  
    float32x4_t vmlsq_f32 (float32x4_t __a, float32x4_t __b, float32x4_t __c);  
    uint8x16_t vmlsq_u8 (uint8x16_t __a, uint8x16_t __b, uint8x16_t __c);  
    uint16x8_t vmlsq_u16 (uint16x8_t __a, uint16x8_t __b, uint16x8_t __c);  
    uint32x4_t vmlsq_u32 (uint32x4_t __a, uint32x4_t __b, uint32x4_t __c);  
    /*--5、Vector multiply subtract long:vmlsl -> ri = ai - bi * ci --*/  
    int16x8_t vmlsl_s8 (int16x8_t __a, int8x8_t __b, int8x8_t __c);  
    int32x4_t vmlsl_s16 (int32x4_t __a, int16x4_t __b, int16x4_t __c);  
    int64x2_t vmlsl_s32 (int64x2_t __a, int32x2_t __b, int32x2_t __c);  
    uint16x8_t vmlsl_u8 (uint16x8_t __a, uint8x8_t __b, uint8x8_t __c);  
    uint32x4_t vmlsl_u16 (uint32x4_t __a, uint16x4_t __b, uint16x4_t __c);  
    uint64x2_t vmlsl_u32 (uint64x2_t __a, uint32x2_t __b, uint32x2_t __c);  
    /*--6、Vector saturating doubling multiply high: vqdmulh -> ri = sat(ai * bi);  
    doubles the results and returns only the high half of the truncated results--*/  
    int16x4_t vqdmulh_s16 (int16x4_t __a, int16x4_t __b);  
    int32x2_t vqdmulh_s32 (int32x2_t __a, int32x2_t __b);  
    int16x8_t vqdmulhq_s16 (int16x8_t __a, int16x8_t __b);  
    int32x4_t vqdmulhq_s32 (int32x4_t __a, int32x4_t __b);  
    /*--7、Vector saturating rounding doubling multiply high vqrdmulh -> ri = ai * bi:  
    doubles the results and returns only the high half of the rounded results.  
    The results are saturated if they overflow--*/  
    int16x4_t vqrdmulh_s16 (int16x4_t __a, int16x4_t __b);  
    int32x2_t vqrdmulh_s32 (int32x2_t __a, int32x2_t __b);  
    int16x8_t vqrdmulhq_s16 (int16x8_t __a, int16x8_t __b);  
    int32x4_t vqrdmulhq_s32 (int32x4_t __a, int32x4_t __b);  
    /*--8、Vector saturating doubling multiply accumulate long: vqdmlal -> ri = sat(ai + 2 * bi * ci); 
    multiplies the elements in the second and third vectors, doubles the results and adds the 
    results to the values in the first vector. The results are saturated if they overflow--*/  
    int32x4_t vqdmlal_s16 (int32x4_t __a, int16x4_t __b, int16x4_t __c);  
    int64x2_t  vqdmlal_s32 (int64x2_t __a, int32x2_t __b, int32x2_t __c);  
    /*--9、Vector saturating doubling multiply subtract long: vqdmlsl -> ri = sat(ai - 2 * bi * ci); 
    multiplies the elements in the second and third vectors, doubles the results and subtracts  
    the results from the elements in the first vector.  
    The results are saturated if they overflow--*/  
    int32x4_t vqdmlsl_s16 (int32x4_t __a, int16x4_t __b, int16x4_t __c);  
    int64x2_t vqdmlsl_s32 (int64x2_t __a, int32x2_t __b, int32x2_t __c);  
    /*--10、Vector long multiply(long): vmull -> ri = ai * bi;--*/  
    int16x8_t vmull_s8 (int8x8_t __a, int8x8_t __b);  
    int32x4_t vmull_s16 (int16x4_t __a, int16x4_t __b);  
    int64x2_t vmull_s32 (int32x2_t __a, int32x2_t __b);  
    uint16x8_t vmull_u8 (uint8x8_t __a, uint8x8_t __b);  
    uint32x4_t vmull_u16 (uint16x4_t __a, uint16x4_t __b);  
    uint64x2_t vmull_u32 (uint32x2_t __a, uint32x2_t __b);  
    poly16x8_t vmull_p8 (poly8x8_t __a, poly8x8_t __b);  
    /*--11、Vector saturating doubling long multiply: vqdmull -> ri = sat(2 * ai * bi); 
    If any of the results overflow, they are saturated--*/  
    int32x4_t vqdmull_s16 (int16x4_t __a, int16x4_t __b);  
    int64x2_t vqdmull_s32 (int32x2_t __a, int32x2_t __b);  
    /*--12、Fused multiply accumulate: vfma -> ri = ai + bi * ci;  
    The result of the multiply is not rounded before the accumulation--*/  
    float32x2_t vfma_f32 (float32x2_t __a, float32x2_t __b, float32x2_t __c);  
    float32x4_t vfmaq_f32 (float32x4_t __a, float32x4_t __b, float32x4_t __c);  
    /*--13、Fused multiply subtract: vfms -> ri = ai - bi * ci;  
    The result of the multiply is not rounded before the subtraction--*/  
    float32x2_t vfms_f32 (float32x2_t __a, float32x2_t __b, float32x2_t __c);  
    float32x4_t vfmsq_f32 (float32x4_t __a, float32x4_t __b, float32x4_t __c);  
    /******************************************************Round to integral****************/  
    /*--1、to nearest, ties to even--*/  
    float32x2_t vrndn_f32 (float32x2_t __a);  
    float32x4_t vrndnq_f32 (float32x4_t __a);  
    /*--2、to nearest, ties away from zero--*/  
    float32x2_t vrnda_f32 (float32x2_t __a);  
    float32x4_t vrndaq_f32 (float32x4_t __a);  
    /*--3、towards +Inf--*/  
    float32x2_t vrndp_f32 (float32x2_t __a);  
    float32x4_t vrndpq_f32 (float32x4_t __a);  
    /*--4、towards -Inf--*/  
    float32x2_t vrndm_f32 (float32x2_t __a);  
    float32x4_t vrndmq_f32 (float32x4_t __a);  
    /*--5、towards 0--*/  
    float32x2_t vrnd_f32 (float32x2_t __a);  
    float32x4_t vrndq_f32 (float32x4_t __a);  
    /**********************************************Subtraction******************************/  
    /*--1、Vector subtract(normal): vsub -> ri = ai - bi;--*/  
    int8x8_t vsub_s8 (int8x8_t __a, int8x8_t __b);//_mm_sub_epi8  
    int16x4_t vsub_s16 (int16x4_t __a, int16x4_t __b);//_mm_sub_epi16  
    int32x2_t vsub_s32 (int32x2_t __a, int32x2_t __b);//_mm_sub_epi32  
    int64x1_t vsub_s64 (int64x1_t __a, int64x1_t __b);//_mm_sub_epi64  
    float32x2_t vsub_f32 (float32x2_t __a, float32x2_t __b);//_mm_sub_ps  
    uint8x8_t vsub_u8 (uint8x8_t __a, uint8x8_t __b);//_mm_sub_epi8  
    uint16x4_t vsub_u16 (uint16x4_t __a, uint16x4_t __b);//_mm_sub_epi16  
    uint32x2_t vsub_u32 (uint32x2_t __a, uint32x2_t __b);//_mm_sub_epi32  
    uint64x1_t vsub_u64 (uint64x1_t __a, uint64x1_t __b);//_mm_sub_epi64  
    int8x16_t vsubq_s8 (int8x16_t __a, int8x16_t __b);//_mm_sub_epi8  
    int16x8_t vsubq_s16 (int16x8_t __a, int16x8_t __b);//_mm_sub_epi16  
    int32x4_t vsubq_s32 (int32x4_t __a, int32x4_t __b);//_mm_sub_epi32  
    int64x2_t vsubq_s64 (int64x2_t __a, int64x2_t __b);//_mm_sub_epi64  
    float32x4_t vsubq_f32 (float32x4_t __a, float32x4_t __b);//_mm_sub_ps  
    uint8x16_t vsubq_u8 (uint8x16_t __a, uint8x16_t __b);//_mm_sub_epi8  
    uint16x8_t vsubq_u16 (uint16x8_t __a, uint16x8_t __b);//_mm_sub_epi16  
    uint32x4_t vsubq_u32 (uint32x4_t __a, uint32x4_t __b);//_mm_sub_epi32  
    uint64x2_t vsubq_u64 (uint64x2_t __a, uint64x2_t __b);//_mm_sub_epi64  
    /*--2、Vector long subtract(long): vsubl -> ri = ai - bi; --*/  
    int16x8_t vsubl_s8 (int8x8_t __a, int8x8_t __b);  
    int32x4_t vsubl_s16 (int16x4_t __a, int16x4_t __b);  
    int64x2_t vsubl_s32 (int32x2_t __a, int32x2_t __b);  
    uint16x8_t vsubl_u8 (uint8x8_t __a, uint8x8_t __b);  
    uint32x4_t vsubl_u16 (uint16x4_t __a, uint16x4_t __b);  
    uint64x2_t vsubl_u32 (uint32x2_t __a, uint32x2_t __b);  
    /*--3、Vector wide subtract(wide): vsubw -> ri = ai - bi;--*/  
    int16x8_t vsubw_s8 (int16x8_t __a, int8x8_t __b);  
    int32x4_t vsubw_s16 (int32x4_t __a, int16x4_t __b);  
    int64x2_t vsubw_s32 (int64x2_t __a, int32x2_t __b);  
    uint16x8_t vsubw_u8 (uint16x8_t __a, uint8x8_t __b);  
    uint32x4_t vsubw_u16 (uint32x4_t __a, uint16x4_t __b);  
    uint64x2_t vsubw_u32 (uint64x2_t __a, uint32x2_t __b);  
    /*--4、Vector saturating subtract(saturating): vqsub -> ri = sat(ai - bi); 
    If any of the results overflow, they are saturated--*/  
    int8x8_t vqsub_s8 (int8x8_t __a, int8x8_t __b);//_mm_subs_epi8  
    int16x4_t vqsub_s16 (int16x4_t __a, int16x4_t __b);//_mm_subs_epi16  
    int32x2_t vqsub_s32 (int32x2_t __a, int32x2_t __b);//_mm_subs_epi32  
    int64x1_t vqsub_s64 (int64x1_t __a, int64x1_t __b);  
    uint8x8_t vqsub_u8 (uint8x8_t __a, uint8x8_t __b);//_mm_subs_epu8  
    uint16x4_t vqsub_u16 (uint16x4_t __a, uint16x4_t __b);//_mm_subs_epu16  
    uint32x2_t vqsub_u32 (uint32x2_t __a, uint32x2_t __b);//no direct SSE equivalent  
    uint64x1_t vqsub_u64 (uint64x1_t __a, uint64x1_t __b);  
    int8x16_t vqsubq_s8 (int8x16_t __a, int8x16_t __b);//_mm_subs_epi8  
    int16x8_t vqsubq_s16 (int16x8_t __a, int16x8_t __b);//_mm_subs_epi16  
    int32x4_t vqsubq_s32 (int32x4_t __a, int32x4_t __b);//no direct SSE equivalent  
    int64x2_t vqsubq_s64 (int64x2_t __a, int64x2_t __b);  
    uint8x16_t vqsubq_u8 (uint8x16_t __a, uint8x16_t __b);//_mm_subs_epu8  
    uint16x8_t vqsubq_u16 (uint16x8_t __a, uint16x8_t __b);//_mm_subs_epu16  
    uint32x4_t vqsubq_u32 (uint32x4_t __a, uint32x4_t __b);//no direct SSE equivalent  
    uint64x2_t vqsubq_u64 (uint64x2_t __a, uint64x2_t __b);  
    /*--5、Vector halving subtract: vhsub -> ri = (ai - bi) >> 1;  
    shifts each result right one bit, The results are truncated.--*/  
    int8x8_t vhsub_s8 (int8x8_t __a, int8x8_t __b);  
    int16x4_t vhsub_s16 (int16x4_t __a, int16x4_t __b);  
    int32x2_t vhsub_s32 (int32x2_t __a, int32x2_t __b);  
    uint8x8_t vhsub_u8 (uint8x8_t __a, uint8x8_t __b);  
    uint16x4_t vhsub_u16 (uint16x4_t __a, uint16x4_t __b);  
    uint32x2_t vhsub_u32 (uint32x2_t __a, uint32x2_t __b);  
    int8x16_t vhsubq_s8 (int8x16_t __a, int8x16_t __b);  
    int16x8_t vhsubq_s16 (int16x8_t __a, int16x8_t __b);  
    int32x4_t vhsubq_s32 (int32x4_t __a, int32x4_t __b);  
    uint8x16_t vhsubq_u8 (uint8x16_t __a, uint8x16_t __b);  
    uint16x8_t vhsubq_u16 (uint16x8_t __a, uint16x8_t __b);  
    uint32x4_t vhsubq_u32 (uint32x4_t __a, uint32x4_t __b);  
    /*--6、Vector subtract high half (narrowing instruction): vsubhn -> ri = ai - bi; 
    It returns the most significant halves of the results. The results are truncated--*/  
    int8x8_t vsubhn_s16 (int16x8_t __a, int16x8_t __b);  
    int16x4_t vsubhn_s32 (int32x4_t __a, int32x4_t __b);  
    int32x2_t vsubhn_s64 (int64x2_t __a, int64x2_t __b);  
    uint8x8_t vsubhn_u16 (uint16x8_t __a, uint16x8_t __b);  
    uint16x4_t vsubhn_u32 (uint32x4_t __a, uint32x4_t __b);  
    uint32x2_t vsubhn_u64 (uint64x2_t __a, uint64x2_t __b);  
    /*--7、Vector rounding subtract high half (narrowing instruction): vrsubhn -> ri = ai - bi;  
    It returns the most significant halves of the results. The results are rounded--*/  
    int8x8_t vrsubhn_s16 (int16x8_t __a, int16x8_t __b);  
    int16x4_t vrsubhn_s32 (int32x4_t __a, int32x4_t __b);  
    int32x2_t vrsubhn_s64 (int64x2_t __a, int64x2_t __b);  
    uint8x8_t vrsubhn_u16 (uint16x8_t __a, uint16x8_t __b);  
    uint16x4_t vrsubhn_u32 (uint32x4_t __a, uint32x4_t __b);  
    uint32x2_t vrsubhn_u64 (uint64x2_t __a, uint64x2_t __b);  
    /******************************************************Comparison***********************/  
    /*--1、Vector compare equal (normal instruction): vceq -> ri = ai == bi ? 1...1 : 0...0;  
    If they are equal, the corresponding element in the destination vector is set to all ones. 
    Otherwise, it is set to all zeros--*/  
    uint8x8_t vceq_s8 (int8x8_t __a, int8x8_t __b);//_mm_cmpeq_epi8  
    uint16x4_t vceq_s16 (int16x4_t __a, int16x4_t __b);//_mm_cmpeq_epi16  
    uint32x2_t vceq_s32 (int32x2_t __a, int32x2_t __b);//_mm_cmpeq_epi32  
    uint32x2_t vceq_f32 (float32x2_t __a, float32x2_t __b);  
    uint8x8_t vceq_u8 (uint8x8_t __a, uint8x8_t __b);//_mm_cmpeq_epi8  
    uint16x4_t vceq_u16 (uint16x4_t __a, uint16x4_t __b);//_mm_cmpeq_epi16  
    uint32x2_t vceq_u32 (uint32x2_t __a, uint32x2_t __b);//_mm_cmpeq_epi32  
    uint8x8_t vceq_p8 (poly8x8_t __a, poly8x8_t __b);//_mm_cmpeq_epi8  
    uint8x16_t vceqq_s8 (int8x16_t __a, int8x16_t __b);//_mm_cmpeq_epi8  
    uint16x8_t vceqq_s16 (int16x8_t __a, int16x8_t __b);//_mm_cmpeq_epi16  
    uint32x4_t vceqq_s32 (int32x4_t __a, int32x4_t __b);//_mm_cmpeq_epi32  
    uint32x4_t vceqq_f32 (float32x4_t __a, float32x4_t __b);  
    uint8x16_t vceqq_u8 (uint8x16_t __a, uint8x16_t __b);//_mm_cmpeq_epi8  
    uint16x8_t vceqq_u16 (uint16x8_t __a, uint16x8_t __b);//_mm_cmpeq_epi16  
    uint32x4_t vceqq_u32 (uint32x4_t __a, uint32x4_t __b);//_mm_cmpeq_epi32  
    uint8x16_t vceqq_p8 (poly8x16_t __a, poly8x16_t __b);//_mm_cmpeq_epi8  
    /*--2、Vector compare greater-than or equal (normal instruction): vcge -> ri = ai >= bi ? 1...1:0...0; 
    If it is greater than or equal to it, the corresponding element in the destination  
    vector is set to all ones. Otherwise, it is set to all zeros.--*/  
    uint8x8_t vcge_s8 (int8x8_t __a, int8x8_t __b);  
    uint16x4_t vcge_s16 (int16x4_t __a, int16x4_t __b);  
    uint32x2_t vcge_s32 (int32x2_t __a, int32x2_t __b);  
    uint32x2_t vcge_f32 (float32x2_t __a, float32x2_t __b);  
    uint8x8_t vcge_u8 (uint8x8_t __a, uint8x8_t __b);  
    uint16x4_t vcge_u16 (uint16x4_t __a, uint16x4_t __b);  
    uint32x2_t vcge_u32 (uint32x2_t __a, uint32x2_t __b);  
    uint8x16_t vcgeq_s8 (int8x16_t __a, int8x16_t __b);  
    uint16x8_t vcgeq_s16 (int16x8_t __a, int16x8_t __b);  
    uint32x4_t vcgeq_s32 (int32x4_t __a, int32x4_t __b);  
    uint32x4_t vcgeq_f32 (float32x4_t __a, float32x4_t __b);  
    uint8x16_t vcgeq_u8 (uint8x16_t __a, uint8x16_t __b);  
    uint16x8_t vcgeq_u16 (uint16x8_t __a, uint16x8_t __b);  
    uint32x4_t vcgeq_u32 (uint32x4_t __a, uint32x4_t __b);  
    /*--3、Vector compare less-than or equal (normal instruction): vcle -> ri = ai <= bi ? 1...1:0...0; 
    If it is less than or equal to it, the corresponding element in the destination vector  
    is set to all ones. Otherwise, it is set to all zeros.--*/  
    uint8x8_t vcle_s8 (int8x8_t __a, int8x8_t __b);  
    uint16x4_t vcle_s16 (int16x4_t __a, int16x4_t __b);  
    uint32x2_t vcle_s32 (int32x2_t __a, int32x2_t __b);  
    uint32x2_t vcle_f32 (float32x2_t __a, float32x2_t __b);  
    uint8x8_t vcle_u8 (uint8x8_t __a, uint8x8_t __b);  
    uint16x4_t vcle_u16 (uint16x4_t __a, uint16x4_t __b);  
    uint32x2_t vcle_u32 (uint32x2_t __a, uint32x2_t __b);  
    uint8x16_t vcleq_s8 (int8x16_t __a, int8x16_t __b);  
    uint16x8_t vcleq_s16 (int16x8_t __a, int16x8_t __b);  
    uint32x4_t vcleq_s32 (int32x4_t __a, int32x4_t __b);  
    uint32x4_t vcleq_f32 (float32x4_t __a, float32x4_t __b);  
    uint8x16_t vcleq_u8 (uint8x16_t __a, uint8x16_t __b);  
    uint16x8_t vcleq_u16 (uint16x8_t __a, uint16x8_t __b);  
    uint32x4_t vcleq_u32 (uint32x4_t __a, uint32x4_t __b);  
    /*--4、Vector compare greater-than (normal instruction): vcgt -> ri = ai > bi ? 1...1:0...0; 
    If it is greater than it, the corresponding element in the destination vector is 
    set to all ones. Otherwise, it is set to all zeros--*/  
    uint8x8_t vcgt_s8 (int8x8_t __a, int8x8_t __b);  
    uint16x4_t vcgt_s16 (int16x4_t __a, int16x4_t __b);  
    uint32x2_t vcgt_s32 (int32x2_t __a, int32x2_t __b);  
    uint32x2_t vcgt_f32 (float32x2_t __a, float32x2_t __b);  
    uint8x8_t vcgt_u8 (uint8x8_t __a, uint8x8_t __b);  
    uint16x4_t vcgt_u16 (uint16x4_t __a, uint16x4_t __b);  
    uint32x2_t vcgt_u32 (uint32x2_t __a, uint32x2_t __b);  
    uint8x16_t vcgtq_s8 (int8x16_t __a, int8x16_t __b);  
    uint16x8_t vcgtq_s16 (int16x8_t __a, int16x8_t __b);  
    uint32x4_t vcgtq_s32 (int32x4_t __a, int32x4_t __b);  
    uint32x4_t vcgtq_f32 (float32x4_t __a, float32x4_t __b);  
    uint8x16_t vcgtq_u8 (uint8x16_t __a, uint8x16_t __b);  
    uint16x8_t vcgtq_u16 (uint16x8_t __a, uint16x8_t __b);  
    uint32x4_t vcgtq_u32 (uint32x4_t __a, uint32x4_t __b);  
    /*--5、Vector compare less-than (normal instruction): vclt -> ri = ai < bi ? 1...1:0...0; 
    If it is less than it, the corresponding element in the destination vector is set  
    to all ones.Otherwise, it is set to all zeros--*/  
    uint8x8_t vclt_s8 (int8x8_t __a, int8x8_t __b);  
    uint16x4_t vclt_s16 (int16x4_t __a, int16x4_t __b);  
    uint32x2_t vclt_s32 (int32x2_t __a, int32x2_t __b);  
    uint32x2_t vclt_f32 (float32x2_t __a, float32x2_t __b);  
    uint8x8_t vclt_u8 (uint8x8_t __a, uint8x8_t __b);  
    uint16x4_t vclt_u16 (uint16x4_t __a, uint16x4_t __b);  
    uint32x2_t vclt_u32 (uint32x2_t __a, uint32x2_t __b);  
    uint8x16_t vcltq_s8 (int8x16_t __a, int8x16_t __b);  
    uint16x8_t vcltq_s16 (int16x8_t __a, int16x8_t __b);  
    uint32x4_t vcltq_s32 (int32x4_t __a, int32x4_t __b);  
    uint32x4_t vcltq_f32 (float32x4_t __a, float32x4_t __b);  
    uint8x16_t vcltq_u8 (uint8x16_t __a, uint8x16_t __b);  
    uint16x8_t vcltq_u16 (uint16x8_t __a, uint16x8_t __b);  
    uint32x4_t vcltq_u32 (uint32x4_t __a, uint32x4_t __b);  
    /*--6、Vector compare absolute greater-than or equal (normal instruction):  
    vcage -> ri = |ai| >= |bi| ? 1...1:0...0; 
    compares the absolute value of each element in a vector with the absolute value of the  
    corresponding element of a second vector. If it is greater than or equal to it,  
    the corresponding element in the destination vector is set to all ones. 
    Otherwise, it is set to all zeros.--*/  
    uint32x2_t vcage_f32 (float32x2_t __a, float32x2_t __b);  
    uint32x4_t vcageq_f32 (float32x4_t __a, float32x4_t __b);  
    /*--7、Vector compare absolute less-than or equal (normal instruction): 
    vcale -> ri = |ai| <= |bi| ? 1...1:0...0; 
    compares the absolute value of each element in a vector with the absolute value of the  
    corresponding element of a second vector. If it is less than or equal to it,  
    the corresponding element in the destination vector is set to all ones. 
    Otherwise, it is set to all zeros--*/  
    uint32x2_t vcale_f32 (float32x2_t __a, float32x2_t __b);  
    uint32x4_t vcaleq_f32 (float32x4_t __a, float32x4_t __b);  
    /*--8、Vector compare absolute greater-than (normal instruction): 
    vcagt -> ri = |ai| > |bi| ? 1...1:0...0; 
    compares the absolute value of each element in a vector with the absolute value of the 
    corresponding element of a second vector. If it is greater than it,  
    the corresponding element in the destination vector is set to all ones.  
    Otherwise, it is set to all zeros.--*/  
    uint32x2_t vcagt_f32 (float32x2_t __a, float32x2_t __b);  
    uint32x4_t vcagtq_f32 (float32x4_t __a, float32x4_t __b);  
    /*--9、Vector compare absolute less-than (normal instruction): 
    vcalt -> ri = |ai| < |bi| ? 1...1:0...0; 
    compares the absolute value of each element in a vector with the absolute value of the 
    corresponding element of a second vector.If it is less than it, the corresponding  
    element in the destination vector is set to all ones. Otherwise,it is set to all zeros--*/  
    uint32x2_t vcalt_f32 (float32x2_t __a, float32x2_t __b);  
    uint32x4_t vcaltq_f32 (float32x4_t __a, float32x4_t __b);  
    /**********************************************Vector test bits*************************/  
    /*--Normal instruction, vtst -> ri = ((ai & bi) != 0) ? 1...1:0...0; 
    bitwise logical ANDs each element in a vector with the corresponding element of a second  
    vector.If the result is not zero, the corresponding element in the destination vector  
    is set to all ones. Otherwise, it is set to all zeros--*/  
    uint8x8_t vtst_s8 (int8x8_t __a, int8x8_t __b);  
    uint16x4_t vtst_s16 (int16x4_t __a, int16x4_t __b);  
    uint32x2_t vtst_s32 (int32x2_t __a, int32x2_t __b);  
    uint8x8_t vtst_u8 (uint8x8_t __a, uint8x8_t __b);  
    uint16x4_t vtst_u16 (uint16x4_t __a, uint16x4_t __b);  
    uint32x2_t vtst_u32 (uint32x2_t __a, uint32x2_t __b);  
    uint8x8_t vtst_p8 (poly8x8_t __a, poly8x8_t __b);  
    uint8x16_t vtstq_s8 (int8x16_t __a, int8x16_t __b);  
    uint16x8_t vtstq_s16 (int16x8_t __a, int16x8_t __b);  
    uint32x4_t vtstq_s32 (int32x4_t __a, int32x4_t __b);  
    uint8x16_t vtstq_u8 (uint8x16_t __a, uint8x16_t __b);  
    uint16x8_t vtstq_u16 (uint16x8_t __a, uint16x8_t __b);  
    uint32x4_t vtstq_u32 (uint32x4_t __a, uint32x4_t __b);  
    uint8x16_t vtstq_p8 (poly8x16_t __a, poly8x16_t __b);  
    /**********************************************Absolute difference**********************/  
    /*--1、Absolute difference between the arguments (normal instruction): vabd -> ri = |ai - bi|; 
    returns the absolute values of the results--*/  
    int8x8_t vabd_s8 (int8x8_t __a, int8x8_t __b);  
    int16x4_t vabd_s16 (int16x4_t __a, int16x4_t __b);  
    int32x2_t vabd_s32 (int32x2_t __a, int32x2_t __b);  
    float32x2_t vabd_f32 (float32x2_t __a, float32x2_t __b);  
    uint8x8_t vabd_u8 (uint8x8_t __a, uint8x8_t __b);  
    uint16x4_t vabd_u16 (uint16x4_t __a, uint16x4_t __b);  
    uint32x2_t vabd_u32 (uint32x2_t __a, uint32x2_t __b);  
    int8x16_t vabdq_s8 (int8x16_t __a, int8x16_t __b);  
    int16x8_t vabdq_s16 (int16x8_t __a, int16x8_t __b);  
    int32x4_t vabdq_s32 (int32x4_t __a, int32x4_t __b);  
    float32x4_t vabdq_f32 (float32x4_t __a, float32x4_t __b);  
    uint8x16_t vabdq_u8 (uint8x16_t __a, uint8x16_t __b);  
    uint16x8_t vabdq_u16 (uint16x8_t __a, uint16x8_t __b);  
    uint32x4_t vabdq_u32 (uint32x4_t __a, uint32x4_t __b);  
    /*--2、Absolute difference - long (long instruction): vabdl -> ri = |ai - bi|;  
    The elements in the result vector are wider--*/  
    int16x8_t vabdl_s8 (int8x8_t __a, int8x8_t __b);  
    int32x4_t vabdl_s16 (int16x4_t __a, int16x4_t __b);  
    int64x2_t vabdl_s32 (int32x2_t __a, int32x2_t __b);  
    uint16x8_t vabdl_u8 (uint8x8_t __a, uint8x8_t __b);  
    uint32x4_t vabdl_u16 (uint16x4_t __a, uint16x4_t __b);  
    uint64x2_t vabdl_u32 (uint32x2_t __a, uint32x2_t __b);  
    /*--3、Absolute difference and accumulate: vaba -> ri = ai + |bi - ci|;--*/  
    int8x8_t vaba_s8 (int8x8_t __a, int8x8_t __b, int8x8_t __c);  
    int16x4_t vaba_s16 (int16x4_t __a, int16x4_t __b, int16x4_t __c);  
    int32x2_t vaba_s32 (int32x2_t __a, int32x2_t __b, int32x2_t __c);  
    uint8x8_t vaba_u8 (uint8x8_t __a, uint8x8_t __b, uint8x8_t __c);  
    uint16x4_t vaba_u16 (uint16x4_t __a, uint16x4_t __b, uint16x4_t __c);  
    uint32x2_t vaba_u32 (uint32x2_t __a, uint32x2_t __b, uint32x2_t __c);  
    int8x16_t vabaq_s8 (int8x16_t __a, int8x16_t __b, int8x16_t __c);  
    int16x8_t vabaq_s16 (int16x8_t __a, int16x8_t __b, int16x8_t __c);  
    int32x4_t vabaq_s32 (int32x4_t __a, int32x4_t __b, int32x4_t __c);  
    uint8x16_t vabaq_u8 (uint8x16_t __a, uint8x16_t __b, uint8x16_t __c);  
    uint16x8_t vabaq_u16 (uint16x8_t __a, uint16x8_t __b, uint16x8_t __c);  
    uint32x4_t vabaq_u32 (uint32x4_t __a, uint32x4_t __b, uint32x4_t __c);  
    /*--4、Absolute difference and accumulate - long: vabal -> ri = ai + |bi - ci|;  
    The elements in the result are wider--*/  
    int16x8_t vabal_s8 (int16x8_t __a, int8x8_t __b, int8x8_t __c);  
    int32x4_t vabal_s16 (int32x4_t __a, int16x4_t __b, int16x4_t __c);  
    int64x2_t vabal_s32 (int64x2_t __a, int32x2_t __b, int32x2_t __c);  
    uint16x8_t vabal_u8 (uint16x8_t __a, uint8x8_t __b, uint8x8_t __c);  
    uint32x4_t vabal_u16 (uint32x4_t __a, uint16x4_t __b, uint16x4_t __c);  
    uint64x2_t vabal_u32 (uint64x2_t __a, uint32x2_t __b, uint32x2_t __c);  
    /***********************************************Max*************************************/  
    /*--Normal instruction, vmax -> ri = ai >= bi ? ai : bi; returns the larger of each pair--*/  
    int8x8_t vmax_s8 (int8x8_t __a, int8x8_t __b);//_mm_max_epi8  
    int16x4_t vmax_s16 (int16x4_t __a, int16x4_t __b);//_mm_max_epi16  
    int32x2_t vmax_s32 (int32x2_t __a, int32x2_t __b);//_mm_max_epi32  
    float32x2_t vmax_f32 (float32x2_t __a, float32x2_t __b);//_mm_max_ps  
    uint8x8_t vmax_u8 (uint8x8_t __a, uint8x8_t __b);//_mm_max_epu8  
    uint16x4_t vmax_u16 (uint16x4_t __a, uint16x4_t __b);//_mm_max_epu16  
    uint32x2_t vmax_u32 (uint32x2_t __a, uint32x2_t __b);//_mm_max_epu32  
    int8x16_t vmaxq_s8 (int8x16_t __a, int8x16_t __b);//_mm_max_epi8  
    int16x8_t vmaxq_s16 (int16x8_t __a, int16x8_t __b);//_mm_max_epi16  
    int32x4_t vmaxq_s32 (int32x4_t __a, int32x4_t __b);//_mm_max_epi32  
    float32x4_t vmaxq_f32 (float32x4_t __a, float32x4_t __b);//_mm_max_ps  
    uint8x16_t vmaxq_u8 (uint8x16_t __a, uint8x16_t __b);//_mm_max_epu8  
    uint16x8_t vmaxq_u16 (uint16x8_t __a, uint16x8_t __b);//_mm_max_epu16  
    uint32x4_t vmaxq_u32 (uint32x4_t __a, uint32x4_t __b);//_mm_max_epu32  
    /****************************************************Min********************************/  
    /*--Normal instruction, vmin -> ri = ai >= bi ? bi : ai; returns the smaller of each pair--*/  
    int8x8_t vmin_s8 (int8x8_t __a, int8x8_t __b);//_mm_min_epi8  
    int16x4_t vmin_s16 (int16x4_t __a, int16x4_t __b);//_mm_min_epi16  
    int32x2_t vmin_s32 (int32x2_t __a, int32x2_t __b);//_mm_min_epi32  
    float32x2_t vmin_f32 (float32x2_t __a, float32x2_t __b);//_mm_min_ps  
    uint8x8_t vmin_u8 (uint8x8_t __a, uint8x8_t __b);//_mm_min_epu8  
    uint16x4_t vmin_u16 (uint16x4_t __a, uint16x4_t __b);//_mm_min_epu16  
    uint32x2_t vmin_u32 (uint32x2_t __a, uint32x2_t __b);//_mm_min_epu32  
    int8x16_t vminq_s8 (int8x16_t __a, int8x16_t __b);//_mm_min_epi8  
    int16x8_t vminq_s16 (int16x8_t __a, int16x8_t __b);//_mm_min_epi16  
    int32x4_t vminq_s32 (int32x4_t __a, int32x4_t __b);//_mm_min_epi32  
    float32x4_t vminq_f32 (float32x4_t __a, float32x4_t __b);//_mm_min_ps  
    uint8x16_t vminq_u8 (uint8x16_t __a, uint8x16_t __b);//_mm_min_epu8  
    uint16x8_t vminq_u16 (uint16x8_t __a, uint16x8_t __b);//_mm_min_epu16  
    uint32x4_t vminq_u32 (uint32x4_t __a, uint32x4_t __b);//_mm_min_epu32  
    /*******************************************Pairwise addition***************************/  
    /*--1、Pairwise add (normal instruction):  
    vpadd -> r0 = a0 + a1, ..., r3 = a6 + a7, r4 = b0 + b1, ..., r7 = b6 + b7 
    adds adjacent pairs of elements of two vectors,  
    and places the results in the destination vector.--*/  
    int8x8_t vpadd_s8 (int8x8_t __a, int8x8_t __b);  
    int16x4_t vpadd_s16 (int16x4_t __a, int16x4_t __b);  
    int32x2_t vpadd_s32 (int32x2_t __a, int32x2_t __b);  
    float32x2_t vpadd_f32 (float32x2_t __a, float32x2_t __b);  
    uint8x8_t vpadd_u8 (uint8x8_t __a, uint8x8_t __b);  
    uint16x4_t vpadd_u16 (uint16x4_t __a, uint16x4_t __b);  
    uint32x2_t vpadd_u32 (uint32x2_t __a, uint32x2_t __b);  
    /*--2、Long pairwise add: vpaddl -> r0 = a0 + a1, ..., r3 = a6 + a7; 
    adds adjacent pairs of elements of a vector, sign extends or zero extends the results to  
    twice their original width, and places the final results in the destination vector--*/  
    int16x4_t vpaddl_s8 (int8x8_t __a);  
    int32x2_t vpaddl_s16 (int16x4_t __a);  
    int64x1_t vpaddl_s32 (int32x2_t __a);  
    uint16x4_t vpaddl_u8 (uint8x8_t __a);  
    uint32x2_t vpaddl_u16 (uint16x4_t __a);  
    uint64x1_t vpaddl_u32 (uint32x2_t __a);  
    int16x8_t vpaddlq_s8 (int8x16_t __a);  
    int32x4_t vpaddlq_s16 (int16x8_t __a);  
    int64x2_t vpaddlq_s32 (int32x4_t __a);  
    uint16x8_t vpaddlq_u8 (uint8x16_t __a);  
    uint32x4_t vpaddlq_u16 (uint16x8_t __a);  
    uint64x2_t vpaddlq_u32 (uint32x4_t __a);  
    /*--3、Long pairwise add and accumulate:  
    vpadal -> r0 = a0 + (b0 + b1), ..., r3 = a3 + (b6 + b7); 
    adds adjacent pairs of elements in the second vector, sign extends or zero extends the 
    results to twice the original width.  It then accumulates this with the corresponding  
    element in the first vector and places the final results in the destination vector--*/  
    int16x4_t vpadal_s8 (int16x4_t __a, int8x8_t __b);  
    int32x2_t vpadal_s16 (int32x2_t __a, int16x4_t __b);  
    int64x1_t vpadal_s32 (int64x1_t __a, int32x2_t __b);  
    uint16x4_t vpadal_u8 (uint16x4_t __a, uint8x8_t __b);  
    uint32x2_t vpadal_u16 (uint32x2_t __a, uint16x4_t __b);  
    uint64x1_t vpadal_u32 (uint64x1_t __a, uint32x2_t __b);  
    int16x8_t vpadalq_s8 (int16x8_t __a, int8x16_t __b);  
    int32x4_t vpadalq_s16 (int32x4_t __a, int16x8_t __b);  
    int64x2_t vpadalq_s32 (int64x2_t __a, int32x4_t __b);  
    uint16x8_t vpadalq_u8 (uint16x8_t __a, uint8x16_t __b);  
    uint32x4_t vpadalq_u16 (uint32x4_t __a, uint16x8_t __b);  
    uint64x2_t vpadalq_u32 (uint64x2_t __a, uint32x4_t __b);  
    /**********************************************Folding maximum**************************/  
    /*--Normal instruction, vpmax -> r0 = a0 >= a1 ? a0 : a1, ..., r4 = b0 >= b1 ? b0 : b1, ...; 
    compares adjacent pairs of elements, and copies the larger of each pair into the  
    destination vector.The maximums from each pair of the first input vector are stored in  
    the lower half of the destination vector. The maximums from each pair of the second input  
    vector are stored in the higher half of the destination vector--*/  
    int8x8_t vpmax_s8 (int8x8_t __a, int8x8_t __b);  
    int16x4_t vpmax_s16 (int16x4_t __a, int16x4_t __b);  
    int32x2_t vpmax_s32 (int32x2_t __a, int32x2_t __b);  
    float32x2_t vpmax_f32 (float32x2_t __a, float32x2_t __b);  
    uint8x8_t vpmax_u8 (uint8x8_t __a, uint8x8_t __b);  
    uint16x4_t vpmax_u16 (uint16x4_t __a, uint16x4_t __b);  
    uint32x2_t vpmax_u32 (uint32x2_t __a, uint32x2_t __b);  
    /***************************************************Folding minimum*********************/  
    /*--Normal instruction, vpmin -> r0 = a0 >= a1 ? a1 : a0, ..., r4 = b0 >= b1 ? b1 : b0, ...; 
    compares adjacent pairs of elements, and copies the smaller of each pair into the  
    destination vector.The minimums from each pair of the first input vector are stored in  
    the lower half of the destination vector. The minimums from each pair of the second  
    input vector are stored in the higher half of the destination vector.--*/  
    int8x8_t vpmin_s8 (int8x8_t __a, int8x8_t __b);  
    int16x4_t vpmin_s16 (int16x4_t __a, int16x4_t __b);  
    int32x2_t vpmin_s32 (int32x2_t __a, int32x2_t __b);  
    float32x2_t vpmin_f32 (float32x2_t __a, float32x2_t __b);  
    uint8x8_t vpmin_u8 (uint8x8_t __a, uint8x8_t __b);  
    uint16x4_t vpmin_u16 (uint16x4_t __a, uint16x4_t __b);  
    uint32x2_t vpmin_u32 (uint32x2_t __a, uint32x2_t __b);  
    /***************************************************Reciprocal**************************/  
    /*--1、Normal instruction, Newton-Raphson iteration (vrecps): 
    performs a Newton-Raphson step for finding the reciprocal. It multiplies the elements of 
    one vector by the corresponding elements of another vector, subtracts each of the results 
    from 2, and places the final results into the elements of the destination vector--*/  
    float32x2_t vrecps_f32 (float32x2_t __a, float32x2_t __b);  
    float32x4_t vrecpsq_f32 (float32x4_t __a, float32x4_t __b);  
    /*--2、Normal instruction (vrsqrts), performs a Newton-Raphson step for finding the reciprocal square root.  
    It multiplies the elements of one vector by the corresponding elements of another vector,  
    subtracts each of the results from 3, divides these results by two, and places  
    the final results into the elements of the destination vector--*/  
    float32x2_t vrsqrts_f32 (float32x2_t __a, float32x2_t __b);  
    float32x4_t vrsqrtsq_f32 (float32x4_t __a, float32x4_t __b);  
    /************************************************Shifts by signed variable**************/  
    /*--1、Vector shift left (normal instruction): vshl -> ri = ai << bi; (negative values shift right) 
    left shifts each element in a vector by an amount specified in the corresponding element  
    in the second input vector. The shift amount is the signed integer value of the least  
    significant byte of the element in the second input vector. The bits shifted out of each 
    element are lost.If the signed integer value is negative, it results in a right shift--*/  
    int8x8_t vshl_s8 (int8x8_t __a, int8x8_t __b);  
    int16x4_t vshl_s16 (int16x4_t __a, int16x4_t __b);  
    int32x2_t vshl_s32 (int32x2_t __a, int32x2_t __b);  
    int64x1_t vshl_s64 (int64x1_t __a, int64x1_t __b);  
    uint8x8_t vshl_u8 (uint8x8_t __a, int8x8_t __b);  
    uint16x4_t vshl_u16 (uint16x4_t __a, int16x4_t __b);  
    uint32x2_t vshl_u32 (uint32x2_t __a, int32x2_t __b);  
    uint64x1_t vshl_u64 (uint64x1_t __a, int64x1_t __b);  
    int8x16_t vshlq_s8 (int8x16_t __a, int8x16_t __b);  
    int16x8_t vshlq_s16 (int16x8_t __a, int16x8_t __b);  
    int32x4_t vshlq_s32 (int32x4_t __a, int32x4_t __b);  
    int64x2_t vshlq_s64 (int64x2_t __a, int64x2_t __b);  
    uint8x16_t vshlq_u8 (uint8x16_t __a, int8x16_t __b);  
    uint16x8_t vshlq_u16 (uint16x8_t __a, int16x8_t __b);  
    uint32x4_t vshlq_u32 (uint32x4_t __a, int32x4_t __b);  
    uint64x2_t vshlq_u64 (uint64x2_t __a, int64x2_t __b);  
    /*--2、Vector saturating shift left (saturating instruction):  
    vqshl -> ri = sat(ai << bi);(negative values shift right) 
    If the shift value is positive, the operation is a left shift. Otherwise, it is a  
    truncating right shift. left shifts each element in a vector of integers and places 
    the results in the destination vector. It is similar to VSHL.  
    The difference is that the sticky QC flag is set if saturation occurs--*/  
    int8x8_t vqshl_s8 (int8x8_t __a, int8x8_t __b);  
    int16x4_t vqshl_s16 (int16x4_t __a, int16x4_t __b);  
    int32x2_t vqshl_s32 (int32x2_t __a, int32x2_t __b);  
    int64x1_t vqshl_s64 (int64x1_t __a, int64x1_t __b);  
    uint8x8_t vqshl_u8 (uint8x8_t __a, int8x8_t __b);  
    uint16x4_t vqshl_u16 (uint16x4_t __a, int16x4_t __b);  
    uint32x2_t vqshl_u32 (uint32x2_t __a, int32x2_t __b);  
    uint64x1_t vqshl_u64 (uint64x1_t __a, int64x1_t __b);  
    int8x16_t vqshlq_s8 (int8x16_t __a, int8x16_t __b);  
    int16x8_t vqshlq_s16 (int16x8_t __a, int16x8_t __b);  
    int32x4_t vqshlq_s32 (int32x4_t __a, int32x4_t __b);  
    int64x2_t vqshlq_s64 (int64x2_t __a, int64x2_t __b);  
    uint8x16_t vqshlq_u8 (uint8x16_t __a, int8x16_t __b);  
    uint16x8_t vqshlq_u16 (uint16x8_t __a, int16x8_t __b);  
    uint32x4_t vqshlq_u32 (uint32x4_t __a, int32x4_t __b);  
    uint64x2_t vqshlq_u64 (uint64x2_t __a, int64x2_t __b);  
    /*--3、Vector rounding shift left:  
    vrshl -> ri = ai << bi;(negative values shift right) 
    If the shift value is positive, the operation is a left shift. Otherwise, it is a 
    rounding right shift. left shifts each element in a vector of integers and places 
    the results in the destination vector. It is similar to VSHL.  
    The difference is that the shifted value is then rounded.--*/  
    int8x8_t vrshl_s8 (int8x8_t __a, int8x8_t __b);  
    int16x4_t vrshl_s16 (int16x4_t __a, int16x4_t __b);  
    int32x2_t vrshl_s32 (int32x2_t __a, int32x2_t __b);  
    int64x1_t vrshl_s64 (int64x1_t __a, int64x1_t __b);  
    uint8x8_t vrshl_u8 (uint8x8_t __a, int8x8_t __b);  
    uint16x4_t vrshl_u16 (uint16x4_t __a, int16x4_t __b);  
    uint32x2_t vrshl_u32 (uint32x2_t __a, int32x2_t __b);  
    uint64x1_t vrshl_u64 (uint64x1_t __a, int64x1_t __b);  
    int8x16_t vrshlq_s8 (int8x16_t __a, int8x16_t __b);  
    int16x8_t vrshlq_s16 (int16x8_t __a, int16x8_t __b);  
    int32x4_t vrshlq_s32 (int32x4_t __a, int32x4_t __b);  
    int64x2_t vrshlq_s64 (int64x2_t __a, int64x2_t __b);  
    uint8x16_t vrshlq_u8 (uint8x16_t __a, int8x16_t __b);  
    uint16x8_t vrshlq_u16 (uint16x8_t __a, int16x8_t __b);  
    uint32x4_t vrshlq_u32 (uint32x4_t __a, int32x4_t __b);  
    uint64x2_t vrshlq_u64 (uint64x2_t __a, int64x2_t __b);  
    /*--4、Vector saturating rounding shift left (saturating instruction): 
    vqrshl -> ri = ai << bi;(negative values shift right) 
    left shifts each element in a vector of integers and places the results in the  
    destination vector.It is similar to VSHL. The difference is that the shifted value 
    is rounded, and the sticky QC flag is set if saturation occurs.--*/  
    int8x8_t vqrshl_s8 (int8x8_t __a, int8x8_t __b);  
    int16x4_t vqrshl_s16 (int16x4_t __a, int16x4_t __b);  
    int32x2_t vqrshl_s32 (int32x2_t __a, int32x2_t __b);  
    int64x1_t vqrshl_s64 (int64x1_t __a, int64x1_t __b);  
    uint8x8_t vqrshl_u8 (uint8x8_t __a, int8x8_t __b);  
    uint16x4_t vqrshl_u16 (uint16x4_t __a, int16x4_t __b);  
    uint32x2_t vqrshl_u32 (uint32x2_t __a, int32x2_t __b);  
    uint64x1_t vqrshl_u64 (uint64x1_t __a, int64x1_t __b);  
    int8x16_t vqrshlq_s8 (int8x16_t __a, int8x16_t __b);  
    int16x8_t vqrshlq_s16 (int16x8_t __a, int16x8_t __b);  
    int32x4_t vqrshlq_s32 (int32x4_t __a, int32x4_t __b);  
    int64x2_t vqrshlq_s64 (int64x2_t __a, int64x2_t __b);  
    uint8x16_t vqrshlq_u8 (uint8x16_t __a, int8x16_t __b);  
    uint16x8_t vqrshlq_u16 (uint16x8_t __a, int16x8_t __b);  
    uint32x4_t vqrshlq_u32 (uint32x4_t __a, int32x4_t __b);  
    uint64x2_t vqrshlq_u64 (uint64x2_t __a, int64x2_t __b);  
    /****************************************Shifts by a constant***************************/  
    /*--1、Vector shift right by constant: vshr -> ri = ai >> b;The results are truncated. 
    right shifts each element in a vector by an immediate value,  
    and places the results in the destination vector.--*/  
    int8x8_t vshr_n_s8 (int8x8_t __a, const int __b);  
    int16x4_t vshr_n_s16 (int16x4_t __a, const int __b);  
    int32x2_t vshr_n_s32 (int32x2_t __a, const int __b);  
    int64x1_t vshr_n_s64 (int64x1_t __a, const int __b);  
    uint8x8_t vshr_n_u8 (uint8x8_t __a, const int __b);  
    uint16x4_t vshr_n_u16 (uint16x4_t __a, const int __b);  
    uint32x2_t vshr_n_u32 (uint32x2_t __a, const int __b);  
    uint64x1_t vshr_n_u64 (uint64x1_t __a, const int __b);  
    int8x16_t vshrq_n_s8 (int8x16_t __a, const int __b);  
    int16x8_t vshrq_n_s16 (int16x8_t __a, const int __b);  
    int32x4_t vshrq_n_s32 (int32x4_t __a, const int __b);  
    int64x2_t vshrq_n_s64 (int64x2_t __a, const int __b);  
    uint8x16_t vshrq_n_u8 (uint8x16_t __a, const int __b);  
    uint16x8_t vshrq_n_u16 (uint16x8_t __a, const int __b);  
    uint32x4_t vshrq_n_u32 (uint32x4_t __a, const int __b);  
    uint64x2_t vshrq_n_u64 (uint64x2_t __a, const int __b);  
    /*--2、Vector shift left by constant: vshl -> ri = ai << b; 
    left shifts each element in a vector by an immediate value, and places the results in the  
    destination vector. The bits shifted out of the left of each element are lost--*/  
    int8x8_t vshl_n_s8 (int8x8_t __a, const int __b);  
    int16x4_t vshl_n_s16 (int16x4_t __a, const int __b);  
    int32x2_t vshl_n_s32 (int32x2_t __a, const int __b);  
    int64x1_t vshl_n_s64 (int64x1_t __a, const int __b);  
    uint8x8_t vshl_n_u8 (uint8x8_t __a, const int __b);  
    uint16x4_t vshl_n_u16 (uint16x4_t __a, const int __b);  
    uint32x2_t vshl_n_u32 (uint32x2_t __a, const int __b);  
    uint64x1_t vshl_n_u64 (uint64x1_t __a, const int __b);  
    int8x16_t vshlq_n_s8 (int8x16_t __a, const int __b);  
    int16x8_t vshlq_n_s16 (int16x8_t __a, const int __b);  
    int32x4_t vshlq_n_s32 (int32x4_t __a, const int __b);  
    int64x2_t vshlq_n_s64 (int64x2_t __a, const int __b);  
    uint8x16_t vshlq_n_u8 (uint8x16_t __a, const int __b);  
    uint16x8_t vshlq_n_u16 (uint16x8_t __a, const int __b);  
    uint32x4_t vshlq_n_u32 (uint32x4_t __a, const int __b);  
    uint64x2_t vshlq_n_u64 (uint64x2_t __a, const int __b);  
    /*--3、Vector rounding shift right by constant: vrshr -> ri = ai >> b; 
    right shifts each element in a vector by an immediate value, and places the results 
    in the destination vector. The shifted values are rounded.--*/  
    int8x8_t vrshr_n_s8 (int8x8_t __a, const int __b);  
    int16x4_t vrshr_n_s16 (int16x4_t __a, const int __b);  
    int32x2_t vrshr_n_s32 (int32x2_t __a, const int __b);  
    int64x1_t vrshr_n_s64 (int64x1_t __a, const int __b);  
    uint8x8_t vrshr_n_u8 (uint8x8_t __a, const int __b);  
    uint16x4_t vrshr_n_u16 (uint16x4_t __a, const int __b);  
    uint32x2_t vrshr_n_u32 (uint32x2_t __a, const int __b);  
    uint64x1_t vrshr_n_u64 (uint64x1_t __a, const int __b);  
    int8x16_t vrshrq_n_s8 (int8x16_t __a, const int __b);  
    int16x8_t vrshrq_n_s16 (int16x8_t __a, const int __b);  
    int32x4_t vrshrq_n_s32 (int32x4_t __a, const int __b);  
    int64x2_t vrshrq_n_s64 (int64x2_t __a, const int __b);  
    uint8x16_t vrshrq_n_u8 (uint8x16_t __a, const int __b);  
    uint16x8_t vrshrq_n_u16 (uint16x8_t __a, const int __b);  
    uint32x4_t vrshrq_n_u32 (uint32x4_t __a, const int __b);  
    uint64x2_t vrshrq_n_u64 (uint64x2_t __a, const int __b);  
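The practical difference between vshr (truncating) and vrshr (rounding) is easiest to see on a single lane. A minimal scalar sketch of the per-lane semantics for s16 (helper names are mine; assumes arithmetic right shift for signed values, as GCC/Clang provide):

```c
#include <stdint.h>

/* Per-lane semantics of vshr_n_s16: plain arithmetic shift, low bits dropped. */
static inline int16_t shr_s16(int16_t a, int n) {
    return (int16_t)(a >> n);
}

/* Per-lane semantics of vrshr_n_s16: add half of the shift step first, so the
   result is rounded to nearest instead of truncated. Widening to 32 bits keeps
   the rounding constant from overflowing. */
static inline int16_t rshr_s16(int16_t a, int n) {
    return (int16_t)(((int32_t)a + (1 << (n - 1))) >> n);
}
```

For example, shifting 5 right by 1 truncates to 2, while the rounded variant gives 3.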
    /*--4、Vector shift right by constant and accumulate: vsra -> ri = ai + (bi >> c);  
    The shifted values are truncated. Right shifts each element in the second vector by an  
    immediate value, and accumulates the results into the first (destination) vector.--*/  
    int8x8_t vsra_n_s8 (int8x8_t __a, int8x8_t __b, const int __c);  
    int16x4_t vsra_n_s16 (int16x4_t __a, int16x4_t __b, const int __c);  
    int32x2_t vsra_n_s32 (int32x2_t __a, int32x2_t __b, const int __c);  
    int64x1_t vsra_n_s64 (int64x1_t __a, int64x1_t __b, const int __c);  
    uint8x8_t vsra_n_u8 (uint8x8_t __a, uint8x8_t __b, const int __c);  
    uint16x4_t vsra_n_u16 (uint16x4_t __a, uint16x4_t __b, const int __c);  
    uint32x2_t vsra_n_u32 (uint32x2_t __a, uint32x2_t __b, const int __c);  
    uint64x1_t vsra_n_u64 (uint64x1_t __a, uint64x1_t __b, const int __c);  
    int8x16_t vsraq_n_s8 (int8x16_t __a, int8x16_t __b, const int __c);  
    int16x8_t vsraq_n_s16 (int16x8_t __a, int16x8_t __b, const int __c);  
    int32x4_t vsraq_n_s32 (int32x4_t __a, int32x4_t __b, const int __c);  
    int64x2_t vsraq_n_s64 (int64x2_t __a, int64x2_t __b, const int __c);  
    uint8x16_t vsraq_n_u8 (uint8x16_t __a, uint8x16_t __b, const int __c);  
    uint16x8_t vsraq_n_u16 (uint16x8_t __a, uint16x8_t __b, const int __c);  
    uint32x4_t vsraq_n_u32 (uint32x4_t __a, uint32x4_t __b, const int __c);  
    uint64x2_t vsraq_n_u64 (uint64x2_t __a, uint64x2_t __b, const int __c);  
    /*--5、Vector rounding shift right by constant and accumulate:  
    vrsra -> ri = ai + (bi >> c); 
    The shifted values are rounded. Right shifts each element in the second vector by an  
    immediate value, and accumulates the rounded results into the first (destination) vector.--*/  
    int8x8_t vrsra_n_s8 (int8x8_t __a, int8x8_t __b, const int __c);  
    int16x4_t vrsra_n_s16 (int16x4_t __a, int16x4_t __b, const int __c);  
    int32x2_t vrsra_n_s32 (int32x2_t __a, int32x2_t __b, const int __c);  
    int64x1_t vrsra_n_s64 (int64x1_t __a, int64x1_t __b, const int __c);  
    uint8x8_t vrsra_n_u8 (uint8x8_t __a, uint8x8_t __b, const int __c);  
    uint16x4_t vrsra_n_u16 (uint16x4_t __a, uint16x4_t __b, const int __c);  
    uint32x2_t vrsra_n_u32 (uint32x2_t __a, uint32x2_t __b, const int __c);  
    uint64x1_t vrsra_n_u64 (uint64x1_t __a, uint64x1_t __b, const int __c);  
    int8x16_t vrsraq_n_s8 (int8x16_t __a, int8x16_t __b, const int __c);  
    int16x8_t vrsraq_n_s16 (int16x8_t __a, int16x8_t __b, const int __c);  
    int32x4_t vrsraq_n_s32 (int32x4_t __a, int32x4_t __b, const int __c);  
    int64x2_t vrsraq_n_s64 (int64x2_t __a, int64x2_t __b, const int __c);  
    uint8x16_t vrsraq_n_u8 (uint8x16_t __a, uint8x16_t __b, const int __c);  
    uint16x8_t vrsraq_n_u16 (uint16x8_t __a, uint16x8_t __b, const int __c);  
    uint32x4_t vrsraq_n_u32 (uint32x4_t __a, uint32x4_t __b, const int __c);  
    uint64x2_t vrsraq_n_u64 (uint64x2_t __a, uint64x2_t __b, const int __c);  
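vsra/vrsra fuse a right shift with an accumulate, a common pattern when summing scaled contributions. A scalar sketch of the per-lane behaviour (helper names are mine; note the accumulate itself wraps, it is not saturating):

```c
#include <stdint.h>

/* Per-lane semantics of vsra_n_s16(a, b, c): truncating shift of b, then add to a. */
static inline int16_t sra_s16(int16_t a, int16_t b, int c) {
    return (int16_t)(a + (b >> c));
}

/* Per-lane semantics of vrsra_n_s16(a, b, c): rounded shift of b, then add to a. */
static inline int16_t rsra_s16(int16_t a, int16_t b, int c) {
    return (int16_t)(a + (((int32_t)b + (1 << (c - 1))) >> c));
}
```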
    /*--6、Vector saturating shift left by constant: vqshl -> ri = sat(ai << b);  
    left shifts each element in a vector of integers by an immediate value, and places the  
    results in the destination vector; the sticky QC flag is set if saturation occurs.--*/  
    int8x8_t vqshl_n_s8 (int8x8_t __a, const int __b);  
    int16x4_t vqshl_n_s16 (int16x4_t __a, const int __b);  
    int32x2_t vqshl_n_s32 (int32x2_t __a, const int __b);  
    int64x1_t vqshl_n_s64 (int64x1_t __a, const int __b);  
    uint8x8_t vqshl_n_u8 (uint8x8_t __a, const int __b);  
    uint16x4_t vqshl_n_u16 (uint16x4_t __a, const int __b);  
    uint32x2_t vqshl_n_u32 (uint32x2_t __a, const int __b);  
    uint64x1_t vqshl_n_u64 (uint64x1_t __a, const int __b);  
    int8x16_t vqshlq_n_s8 (int8x16_t __a, const int __b);  
    int16x8_t vqshlq_n_s16 (int16x8_t __a, const int __b);  
    int32x4_t vqshlq_n_s32 (int32x4_t __a, const int __b);  
    int64x2_t vqshlq_n_s64 (int64x2_t __a, const int __b);  
    uint8x16_t vqshlq_n_u8 (uint8x16_t __a, const int __b);  
    uint16x8_t vqshlq_n_u16 (uint16x8_t __a, const int __b);  
    uint32x4_t vqshlq_n_u32 (uint32x4_t __a, const int __b);  
    uint64x2_t vqshlq_n_u64 (uint64x2_t __a, const int __b);  
    /*--7、Vector signed->unsigned saturating shift left by constant: vqshlu -> ri = sat(ai << b);  
    left shifts each element in a vector of signed integers by an immediate value and places  
    the results in the destination vector. The results are unsigned even though the operands  
    are signed, and the sticky QC flag is set if saturation occurs.--*/  
    uint8x8_t vqshlu_n_s8 (int8x8_t __a, const int __b);  
    uint16x4_t vqshlu_n_s16 (int16x4_t __a, const int __b);  
    uint32x2_t vqshlu_n_s32 (int32x2_t __a, const int __b);  
    uint64x1_t vqshlu_n_s64 (int64x1_t __a, const int __b);  
    uint8x16_t vqshluq_n_s8 (int8x16_t __a, const int __b);  
    uint16x8_t vqshluq_n_s16 (int16x8_t __a, const int __b);  
    uint32x4_t vqshluq_n_s32 (int32x4_t __a, const int __b);  
    uint64x2_t vqshluq_n_s64 (int64x2_t __a, const int __b);  
    /*--8、Vector narrowing shift right by constant: vshrn -> ri = ai >> b; 
    The results are truncated.right shifts each element in the input vector by an  
    immediate value. It then narrows the result by storing only the least significant 
    half of each element into the destination vector.--*/  
    int8x8_t vshrn_n_s16 (int16x8_t __a, const int __b);  
    int16x4_t vshrn_n_s32 (int32x4_t __a, const int __b);  
    int32x2_t vshrn_n_s64 (int64x2_t __a, const int __b);  
    uint8x8_t vshrn_n_u16 (uint16x8_t __a, const int __b);  
    uint16x4_t vshrn_n_u32 (uint32x4_t __a, const int __b);  
    uint32x2_t vshrn_n_u64 (uint64x2_t __a, const int __b);  
    /*--9、Vector signed->unsigned narrowing saturating shift right by constant:  
    vqshrun -> ri = ai >> b;  
    Results are truncated. right shifts each element in a quadword vector of integers by an 
    immediate value, and places the results in a doubleword vector. The results are unsigned,  
    although the operands are signed. The sticky QC flag is set if saturation occurs.--*/  
    uint8x8_t vqshrun_n_s16 (int16x8_t __a, const int __b);  
    uint16x4_t vqshrun_n_s32 (int32x4_t __a, const int __b);  
    uint32x2_t vqshrun_n_s64 (int64x2_t __a, const int __b);  
    /*--10、Vector signed->unsigned rounding narrowing saturating shift right by constant:  
    vqrshrun -> ri = ai >> b; Results are rounded. right shifts each element in a quadword  
    vector of integers by an immediate value, and places the rounded results in a doubleword  
    vector. The results are unsigned, although the operands are signed.--*/  
    uint8x8_t vqrshrun_n_s16 (int16x8_t __a, const int __b);  
    uint16x4_t vqrshrun_n_s32 (int32x4_t __a, const int __b);  
    uint32x2_t vqrshrun_n_s64 (int64x2_t __a, const int __b);  
    /*--11、Vector narrowing saturating shift right by constant: vqshrn -> ri = ai >> b;  
    Results are truncated. right shifts each element in a quadword vector of integers by an  
    immediate value, and places the results in a doubleword vector,  
    and the sticky QC flag is set if saturation occurs.--*/  
    int8x8_t vqshrn_n_s16 (int16x8_t __a, const int __b);  
    int16x4_t vqshrn_n_s32 (int32x4_t __a, const int __b);  
    int32x2_t vqshrn_n_s64 (int64x2_t __a, const int __b);  
    uint8x8_t vqshrn_n_u16 (uint16x8_t __a, const int __b);  
    uint16x4_t vqshrn_n_u32 (uint32x4_t __a, const int __b);  
    uint32x2_t vqshrn_n_u64 (uint64x2_t __a, const int __b);  
    /*--12、Vector rounding narrowing shift right by constant: vrshrn -> ri = ai >> b;  
    The results are rounded. right shifts each element in a vector by an immediate value, 
    and places the rounded, narrowed results in the destination vector.--*/  
    int8x8_t vrshrn_n_s16 (int16x8_t __a, const int __b);  
    int16x4_t vrshrn_n_s32 (int32x4_t __a, const int __b);  
    int32x2_t vrshrn_n_s64 (int64x2_t __a, const int __b);  
    uint8x8_t vrshrn_n_u16 (uint16x8_t __a, const int __b);  
    uint16x4_t vrshrn_n_u32 (uint32x4_t __a, const int __b);  
    uint32x2_t vrshrn_n_u64 (uint64x2_t __a, const int __b);  
    /*--13、Vector rounding narrowing saturating shift right by constant: 
    vqrshrn -> ri = ai >> b; 
    Results are rounded. right shifts each element in a quadword vector of integers by an  
    immediate value, and places the rounded, narrowed results in a doubleword vector.  
    The sticky QC flag is set if saturation occurs.--*/  
    int8x8_t vqrshrn_n_s16 (int16x8_t __a, const int __b);  
    int16x4_t vqrshrn_n_s32 (int32x4_t __a, const int __b);  
    int32x2_t vqrshrn_n_s64 (int64x2_t __a, const int __b);  
    uint8x8_t vqrshrn_n_u16 (uint16x8_t __a, const int __b);  
    uint16x4_t vqrshrn_n_u32 (uint32x4_t __a, const int __b);  
    uint32x2_t vqrshrn_n_u64 (uint64x2_t __a, const int __b);  
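The narrowing shifts are the usual way to bring a widened intermediate (say, a 16-bit product) back down to 8 bits; vqrshrn does it with both rounding and saturation. A scalar sketch of the s16 -> s8 case (helper name is mine):

```c
#include <stdint.h>

/* Per-lane semantics of vqrshrn_n_s16: rounded right shift, then saturate
   into the narrower int8_t range (saturation would set the QC flag). */
static inline int8_t qrshrn_s16(int16_t a, int n) {
    int32_t r = ((int32_t)a + (1 << (n - 1))) >> n;
    if (r > INT8_MAX) return INT8_MAX;
    if (r < INT8_MIN) return INT8_MIN;
    return (int8_t)r;
}
```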
    /*--14、Vector widening shift left by constant: vshll -> ri = ai << b;  
    left shifts each element in a vector of integers by an immediate value, and places the  
    results in a destination vector whose elements are twice as wide. Each input element is 
    sign extended or zero extended before the shift, so no bits are lost.--*/  
    int16x8_t vshll_n_s8 (int8x8_t __a, const int __b);  
    int32x4_t vshll_n_s16 (int16x4_t __a, const int __b);  
    int64x2_t vshll_n_s32 (int32x2_t __a, const int __b);  
    uint16x8_t vshll_n_u8 (uint8x8_t __a, const int __b);  
    uint32x4_t vshll_n_u16 (uint16x4_t __a, const int __b);  
    uint64x2_t vshll_n_u32 (uint32x2_t __a, const int __b);  
    /********************************************Shifts with insert*************************/  
    /*--1、Vector shift right and insert: vsri -> ; right shifts each element in the second  
    input vector by an immediate value n, and inserts the results in the destination vector.  
    It does not affect the highest n significant bits of the elements in the destination  
    register; bits shifted out of the right of each element are lost. The first input vector 
    holds the elements of the destination vector before the operation is performed.--*/  
    int8x8_t vsri_n_s8 (int8x8_t __a, int8x8_t __b, const int __c);  
    int16x4_t vsri_n_s16 (int16x4_t __a, int16x4_t __b, const int __c);  
    int32x2_t vsri_n_s32 (int32x2_t __a, int32x2_t __b, const int __c);  
    int64x1_t vsri_n_s64 (int64x1_t __a, int64x1_t __b, const int __c);  
    uint8x8_t vsri_n_u8 (uint8x8_t __a, uint8x8_t __b, const int __c);  
    uint16x4_t vsri_n_u16 (uint16x4_t __a, uint16x4_t __b, const int __c);  
    uint32x2_t vsri_n_u32 (uint32x2_t __a, uint32x2_t __b, const int __c);  
    uint64x1_t vsri_n_u64 (uint64x1_t __a, uint64x1_t __b, const int __c);  
    poly8x8_t vsri_n_p8 (poly8x8_t __a, poly8x8_t __b, const int __c);  
    poly16x4_t vsri_n_p16 (poly16x4_t __a, poly16x4_t __b, const int __c);  
    int8x16_t vsriq_n_s8 (int8x16_t __a, int8x16_t __b, const int __c);  
    int16x8_t vsriq_n_s16 (int16x8_t __a, int16x8_t __b, const int __c);  
    int32x4_t vsriq_n_s32 (int32x4_t __a, int32x4_t __b, const int __c);  
    int64x2_t vsriq_n_s64 (int64x2_t __a, int64x2_t __b, const int __c);  
    uint8x16_t vsriq_n_u8 (uint8x16_t __a, uint8x16_t __b, const int __c);  
    uint16x8_t vsriq_n_u16 (uint16x8_t __a, uint16x8_t __b, const int __c);  
    uint32x4_t vsriq_n_u32 (uint32x4_t __a, uint32x4_t __b, const int __c);  
    uint64x2_t vsriq_n_u64 (uint64x2_t __a, uint64x2_t __b, const int __c);  
    poly8x16_t vsriq_n_p8 (poly8x16_t __a, poly8x16_t __b, const int __c);  
    poly16x8_t vsriq_n_p16 (poly16x8_t __a, poly16x8_t __b, const int __c);  
    /*--2、Vector shift left and insert: vsli -> ; left shifts each element in the second  
    input vector by an immediate value n, and inserts the results in the destination vector.  
    It does not affect the lowest n significant bits of the elements in the destination  
    register; bits shifted out of the left of each element are lost. The first input vector 
    holds the elements of the destination vector before the operation is performed.--*/  
    int8x8_t vsli_n_s8 (int8x8_t __a, int8x8_t __b, const int __c);  
    int16x4_t vsli_n_s16 (int16x4_t __a, int16x4_t __b, const int __c);  
    int32x2_t vsli_n_s32 (int32x2_t __a, int32x2_t __b, const int __c);  
    int64x1_t vsli_n_s64 (int64x1_t __a, int64x1_t __b, const int __c);  
    uint8x8_t vsli_n_u8 (uint8x8_t __a, uint8x8_t __b, const int __c);  
    uint16x4_t vsli_n_u16 (uint16x4_t __a, uint16x4_t __b, const int __c);  
    uint32x2_t vsli_n_u32 (uint32x2_t __a, uint32x2_t __b, const int __c);  
    uint64x1_t vsli_n_u64 (uint64x1_t __a, uint64x1_t __b, const int __c);  
    poly8x8_t vsli_n_p8 (poly8x8_t __a, poly8x8_t __b, const int __c);  
    poly16x4_t vsli_n_p16 (poly16x4_t __a, poly16x4_t __b, const int __c);  
    int8x16_t vsliq_n_s8 (int8x16_t __a, int8x16_t __b, const int __c);  
    int16x8_t vsliq_n_s16 (int16x8_t __a, int16x8_t __b, const int __c);  
    int32x4_t vsliq_n_s32 (int32x4_t __a, int32x4_t __b, const int __c);  
    int64x2_t vsliq_n_s64 (int64x2_t __a, int64x2_t __b, const int __c);  
    uint8x16_t vsliq_n_u8 (uint8x16_t __a, uint8x16_t __b, const int __c);  
    uint16x8_t vsliq_n_u16 (uint16x8_t __a, uint16x8_t __b, const int __c);  
    uint32x4_t vsliq_n_u32 (uint32x4_t __a, uint32x4_t __b, const int __c);  
    uint64x2_t vsliq_n_u64 (uint64x2_t __a, uint64x2_t __b, const int __c);  
    poly8x16_t vsliq_n_p8 (poly8x16_t __a, poly8x16_t __b, const int __c);  
    poly16x8_t vsliq_n_p16 (poly16x8_t __a, poly16x8_t __b, const int __c);  
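Shift-and-insert merges the shifted second operand into the destination under a mask, which is handy for packing bit fields (for example RGB565 pixels). A scalar sketch of the u8 lane semantics (helper names are mine):

```c
#include <stdint.h>

/* Per-lane semantics of vsri_n_u8(a, b, n): insert b >> n into a;
   the highest n bits of a are preserved. */
static inline uint8_t sri_u8(uint8_t a, uint8_t b, int n) {
    uint8_t ins_mask = (uint8_t)(0xFFu >> n);    /* bits written by the insert */
    return (uint8_t)((a & (uint8_t)~ins_mask) | ((uint8_t)(b >> n) & ins_mask));
}

/* Per-lane semantics of vsli_n_u8(a, b, n): insert b << n into a;
   the lowest n bits of a are preserved. */
static inline uint8_t sli_u8(uint8_t a, uint8_t b, int n) {
    uint8_t ins_mask = (uint8_t)(0xFFu << n);
    return (uint8_t)((a & (uint8_t)~ins_mask) | ((uint8_t)(b << n) & ins_mask));
}
```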
    /*****************************************Absolute value********************************/  
    /*--1、Absolute (normal instruction): vabs -> ri = |ai|; 
    returns the absolute value of each element in a vector.--*/  
    int8x8_t vabs_s8 (int8x8_t __a);//_mm_abs_epi8  
    int16x4_t vabs_s16 (int16x4_t __a);//_mm_abs_epi16  
    int32x2_t vabs_s32 (int32x2_t __a);//_mm_abs_epi32  
    float32x2_t vabs_f32 (float32x2_t __a);  
    int8x16_t vabsq_s8 (int8x16_t __a);//_mm_abs_epi8  
    int16x8_t vabsq_s16 (int16x8_t __a);//_mm_abs_epi16  
    int32x4_t vabsq_s32 (int32x4_t __a);//_mm_abs_epi32  
    float32x4_t vabsq_f32 (float32x4_t __a);  
    /*--2、Saturating absolute (saturating instruction): vqabs -> ri = sat(|ai|); 
    returns the absolute value of each element in a vector. If any of the results overflow, 
    they are saturated and the sticky QC flag is set.--*/  
    int8x8_t vqabs_s8 (int8x8_t __a);  
    int16x4_t vqabs_s16 (int16x4_t __a);  
    int32x2_t vqabs_s32 (int32x2_t __a);  
    int8x16_t vqabsq_s8 (int8x16_t __a);  
    int16x8_t vqabsq_s16 (int16x8_t __a);  
    int32x4_t vqabsq_s32 (int32x4_t __a);  
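The only lane value where vabs and vqabs disagree is the most negative one: |INT8_MIN| = 128 does not fit in s8, so plain vabs wraps back to -128 while vqabs saturates to 127. A scalar sketch (helper names are mine; the wrap relies on the usual two's-complement narrowing conversion GCC/Clang perform):

```c
#include <stdint.h>

/* Per-lane semantics of vabs_s8: two's-complement negate, which wraps,
   so abs(-128) comes back as -128. */
static inline int8_t abs_s8(int8_t a) {
    return (int8_t)(a < 0 ? -a : a);
}

/* Per-lane semantics of vqabs_s8: the overflowing case saturates to
   INT8_MAX and would set the sticky QC flag. */
static inline int8_t qabs_s8(int8_t a) {
    if (a == INT8_MIN) return INT8_MAX;
    return (int8_t)(a < 0 ? -a : a);
}
```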
    /***************************************************Negation****************************/  
    /*--1、Negate (normal instruction): vneg -> ri = -ai; negates each element in a vector.--*/  
    int8x8_t vneg_s8 (int8x8_t __a);  
    int16x4_t vneg_s16 (int16x4_t __a);  
    int32x2_t vneg_s32 (int32x2_t __a);  
    float32x2_t vneg_f32 (float32x2_t __a);  
    int8x16_t vnegq_s8 (int8x16_t __a);  
    int16x8_t vnegq_s16 (int16x8_t __a);  
    int32x4_t vnegq_s32 (int32x4_t __a);  
    float32x4_t vnegq_f32 (float32x4_t __a);  
    /*--2、Saturating Negate: vqneg -> ri = sat(-ai); 
    negates each element in a vector. If any of the results overflow,  
    they are saturated and the sticky QC flag is set.--*/  
    int8x8_t vqneg_s8 (int8x8_t __a);  
    int16x4_t vqneg_s16 (int16x4_t __a);  
    int32x2_t vqneg_s32 (int32x2_t __a);  
    int8x16_t vqnegq_s8 (int8x16_t __a);  
    int16x8_t vqnegq_s16 (int16x8_t __a);  
    int32x4_t vqnegq_s32 (int32x4_t __a);  
    /********************************************Logical operations*************************/  
    /*--1、Bitwise NOT (normal instruction): vmvn -> ri = ~ai;  
    performs a bitwise inversion of each element of the input vector.--*/  
    int8x8_t vmvn_s8 (int8x8_t __a);  
    int16x4_t vmvn_s16 (int16x4_t __a);  
    int32x2_t vmvn_s32 (int32x2_t __a);  
    uint8x8_t vmvn_u8 (uint8x8_t __a);  
    uint16x4_t vmvn_u16 (uint16x4_t __a);  
    uint32x2_t vmvn_u32 (uint32x2_t __a);  
    poly8x8_t vmvn_p8 (poly8x8_t __a);  
    int8x16_t vmvnq_s8 (int8x16_t __a);  
    int16x8_t vmvnq_s16 (int16x8_t __a);  
    int32x4_t vmvnq_s32 (int32x4_t __a);  
    uint8x16_t vmvnq_u8 (uint8x16_t __a);  
    uint16x8_t vmvnq_u16 (uint16x8_t __a);  
    uint32x4_t vmvnq_u32 (uint32x4_t __a);  
    poly8x16_t vmvnq_p8 (poly8x16_t __a);  
    /*--2、Bitwise AND (normal instruction): vand -> ri = ai & bi; performs a bitwise AND  
    between corresponding elements of the input vectors.--*/  
    int8x8_t vand_s8 (int8x8_t __a, int8x8_t __b);//_mm_and_si128  
    int16x4_t vand_s16 (int16x4_t __a, int16x4_t __b);//_mm_and_si128  
    int32x2_t vand_s32 (int32x2_t __a, int32x2_t __b);//_mm_and_si128  
    uint8x8_t vand_u8 (uint8x8_t __a, uint8x8_t __b);//_mm_and_si128  
    uint16x4_t vand_u16 (uint16x4_t __a, uint16x4_t __b);//_mm_and_si128  
    uint32x2_t vand_u32 (uint32x2_t __a, uint32x2_t __b);//_mm_and_si128  
    int64x1_t vand_s64 (int64x1_t __a, int64x1_t __b);//_mm_and_si128  
    uint64x1_t vand_u64 (uint64x1_t __a, uint64x1_t __b);//_mm_and_si128  
    int8x16_t vandq_s8 (int8x16_t __a, int8x16_t __b);//_mm_and_si128  
    int16x8_t vandq_s16 (int16x8_t __a, int16x8_t __b);//_mm_and_si128  
    int32x4_t vandq_s32 (int32x4_t __a, int32x4_t __b);//_mm_and_si128  
    int64x2_t vandq_s64 (int64x2_t __a, int64x2_t __b);//_mm_and_si128  
    uint8x16_t vandq_u8 (uint8x16_t __a, uint8x16_t __b);//_mm_and_si128  
    uint16x8_t vandq_u16 (uint16x8_t __a, uint16x8_t __b);//_mm_and_si128  
    uint32x4_t vandq_u32 (uint32x4_t __a, uint32x4_t __b);//_mm_and_si128  
    uint64x2_t vandq_u64 (uint64x2_t __a, uint64x2_t __b);//_mm_and_si128  
    /*--3、Bitwise OR (normal instruction): vorr -> ri = ai | bi; performs a bitwise OR  
    between corresponding elements of the input vectors.--*/  
    int8x8_t vorr_s8 (int8x8_t __a, int8x8_t __b);//_mm_or_si128  
    int16x4_t vorr_s16 (int16x4_t __a, int16x4_t __b);//_mm_or_si128  
    int32x2_t vorr_s32 (int32x2_t __a, int32x2_t __b);//_mm_or_si128  
    uint8x8_t vorr_u8 (uint8x8_t __a, uint8x8_t __b);//_mm_or_si128  
    uint16x4_t vorr_u16 (uint16x4_t __a, uint16x4_t __b);//_mm_or_si128  
    uint32x2_t vorr_u32 (uint32x2_t __a, uint32x2_t __b);//_mm_or_si128  
    int64x1_t vorr_s64 (int64x1_t __a, int64x1_t __b);//_mm_or_si128  
    uint64x1_t vorr_u64 (uint64x1_t __a, uint64x1_t __b);//_mm_or_si128  
    int8x16_t vorrq_s8 (int8x16_t __a, int8x16_t __b);//_mm_or_si128  
    int16x8_t vorrq_s16 (int16x8_t __a, int16x8_t __b);//_mm_or_si128  
    int32x4_t vorrq_s32 (int32x4_t __a, int32x4_t __b);//_mm_or_si128  
    int64x2_t vorrq_s64 (int64x2_t __a, int64x2_t __b);//_mm_or_si128  
    uint8x16_t vorrq_u8 (uint8x16_t __a, uint8x16_t __b);//_mm_or_si128  
    uint16x8_t vorrq_u16 (uint16x8_t __a, uint16x8_t __b);//_mm_or_si128  
    uint32x4_t vorrq_u32 (uint32x4_t __a, uint32x4_t __b);//_mm_or_si128  
    uint64x2_t vorrq_u64 (uint64x2_t __a, uint64x2_t __b);//_mm_or_si128  
    /*--4、Bitwise exclusive OR (EOR or XOR) (normal instruction): veor -> ri = ai ^ bi;  
    performs a bitwise exclusive-OR between corresponding elements of the input vectors.--*/  
    int8x8_t veor_s8 (int8x8_t __a, int8x8_t __b);//_mm_xor_si128  
    int16x4_t veor_s16 (int16x4_t __a, int16x4_t __b);//_mm_xor_si128  
    int32x2_t veor_s32 (int32x2_t __a, int32x2_t __b);//_mm_xor_si128  
    uint8x8_t veor_u8 (uint8x8_t __a, uint8x8_t __b);//_mm_xor_si128  
    uint16x4_t veor_u16 (uint16x4_t __a, uint16x4_t __b);//_mm_xor_si128  
    uint32x2_t veor_u32 (uint32x2_t __a, uint32x2_t __b);//_mm_xor_si128  
    int64x1_t veor_s64 (int64x1_t __a, int64x1_t __b);//_mm_xor_si128  
    uint64x1_t veor_u64 (uint64x1_t __a, uint64x1_t __b);//_mm_xor_si128  
    int8x16_t veorq_s8 (int8x16_t __a, int8x16_t __b);//_mm_xor_si128  
    int16x8_t veorq_s16 (int16x8_t __a, int16x8_t __b);//_mm_xor_si128  
    int32x4_t veorq_s32 (int32x4_t __a, int32x4_t __b);//_mm_xor_si128  
    int64x2_t veorq_s64 (int64x2_t __a, int64x2_t __b);//_mm_xor_si128  
    uint8x16_t veorq_u8 (uint8x16_t __a, uint8x16_t __b);//_mm_xor_si128  
    uint16x8_t veorq_u16 (uint16x8_t __a, uint16x8_t __b);//_mm_xor_si128  
    uint32x4_t veorq_u32 (uint32x4_t __a, uint32x4_t __b);//_mm_xor_si128  
    uint64x2_t veorq_u64 (uint64x2_t __a, uint64x2_t __b);//_mm_xor_si128  
    /*--5、Bit Clear (normal instruction): vbic -> ri = ai & ~bi; 
    VBIC (Vector Bitwise Clear) performs a bitwise AND NOT operation between values in two 
    registers, and places the results in the destination register. Note that the SSE 
    counterpart _mm_andnot_si128(a, b) computes ~a & b, so its operands are swapped.--*/  
    int8x8_t vbic_s8 (int8x8_t __a, int8x8_t __b);//_mm_andnot_si128  
    int16x4_t vbic_s16 (int16x4_t __a, int16x4_t __b);//_mm_andnot_si128  
    int32x2_t vbic_s32 (int32x2_t __a, int32x2_t __b);//_mm_andnot_si128  
    uint8x8_t vbic_u8 (uint8x8_t __a, uint8x8_t __b);//_mm_andnot_si128  
    uint16x4_t vbic_u16 (uint16x4_t __a, uint16x4_t __b);//_mm_andnot_si128  
    uint32x2_t vbic_u32 (uint32x2_t __a, uint32x2_t __b);//_mm_andnot_si128  
    int64x1_t vbic_s64 (int64x1_t __a, int64x1_t __b);//_mm_andnot_si128  
    uint64x1_t vbic_u64 (uint64x1_t __a, uint64x1_t __b);//_mm_andnot_si128  
    int8x16_t vbicq_s8 (int8x16_t __a, int8x16_t __b);//_mm_andnot_si128  
    int16x8_t vbicq_s16 (int16x8_t __a, int16x8_t __b);//_mm_andnot_si128  
    int32x4_t vbicq_s32 (int32x4_t __a, int32x4_t __b);//_mm_andnot_si128  
    int64x2_t vbicq_s64 (int64x2_t __a, int64x2_t __b);//_mm_andnot_si128  
    uint8x16_t vbicq_u8 (uint8x16_t __a, uint8x16_t __b);//_mm_andnot_si128  
    uint16x8_t vbicq_u16 (uint16x8_t __a, uint16x8_t __b);//_mm_andnot_si128  
    uint32x4_t vbicq_u32 (uint32x4_t __a, uint32x4_t __b);//_mm_andnot_si128  
    uint64x2_t vbicq_u64 (uint64x2_t __a, uint64x2_t __b);//_mm_andnot_si128  
    /*--6、Bitwise OR complement (normal instruction): vorn -> ri = ai | (~bi);  
    performs a bitwise logical OR NOT operation  
    between values in two registers, and places the results in the destination register.--*/  
    int8x8_t vorn_s8 (int8x8_t __a, int8x8_t __b);  
    int16x4_t vorn_s16 (int16x4_t __a, int16x4_t __b);  
    int32x2_t vorn_s32 (int32x2_t __a, int32x2_t __b);  
    uint8x8_t vorn_u8 (uint8x8_t __a, uint8x8_t __b);  
    uint16x4_t vorn_u16 (uint16x4_t __a, uint16x4_t __b);  
    uint32x2_t vorn_u32 (uint32x2_t __a, uint32x2_t __b);  
    int64x1_t vorn_s64 (int64x1_t __a, int64x1_t __b);  
    uint64x1_t vorn_u64 (uint64x1_t __a, uint64x1_t __b);  
    int8x16_t vornq_s8 (int8x16_t __a, int8x16_t __b);  
    int16x8_t vornq_s16 (int16x8_t __a, int16x8_t __b);  
    int32x4_t vornq_s32 (int32x4_t __a, int32x4_t __b);  
    int64x2_t vornq_s64 (int64x2_t __a, int64x2_t __b);  
    uint8x16_t vornq_u8 (uint8x16_t __a, uint8x16_t __b);  
    uint16x8_t vornq_u16 (uint16x8_t __a, uint16x8_t __b);  
    uint32x4_t vornq_u32 (uint32x4_t __a, uint32x4_t __b);  
    uint64x2_t vornq_u64 (uint64x2_t __a, uint64x2_t __b);  
    /****************************************Count leading sign bits************************/  
    /*--Normal instruction, vcls -> ; counts, in each element of a vector, the number of  
    consecutive bits below the most significant bit that are the same as the most  
    significant bit, and places the count in the result vector.--*/  
    int8x8_t vcls_s8 (int8x8_t __a);  
    int16x4_t vcls_s16 (int16x4_t __a);  
    int32x2_t vcls_s32 (int32x2_t __a);  
    int8x16_t vclsq_s8 (int8x16_t __a);  
    int16x8_t vclsq_s16 (int16x8_t __a);  
    int32x4_t vclsq_s32 (int32x4_t __a);  
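vcls measures sign-extension redundancy: how many bits below the sign bit merely repeat it, which tells you how far a value can be shifted left without overflow. A bit-by-bit scalar sketch for one s8 lane (helper name is mine):

```c
#include <stdint.h>

/* Per-lane semantics of vcls_s8: count consecutive bits below the most
   significant bit that are equal to the most significant bit. */
static inline int cls_s8(int8_t a) {
    int sign = (a >> 7) & 1;            /* the sign bit itself */
    int count = 0;
    for (int i = 6; i >= 0; --i) {      /* scan downward from bit 6 */
        if (((a >> i) & 1) != sign)
            break;
        ++count;
    }
    return count;
}
```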
    /*******************************************Count leading zeros*************************/  
    /*--Normal instruction, vclz -> ; counts the number of consecutive zeros, starting from 
    the most significant bit, in each element in a vector, and places the count in the result 
    vector.--*/  
    int8x8_t vclz_s8 (int8x8_t __a);  
    int16x4_t vclz_s16 (int16x4_t __a);  
    int32x2_t vclz_s32 (int32x2_t __a);  
    uint8x8_t vclz_u8 (uint8x8_t __a);  
    uint16x4_t vclz_u16 (uint16x4_t __a);  
    uint32x2_t vclz_u32 (uint32x2_t __a);  
    int8x16_t vclzq_s8 (int8x16_t __a);  
    int16x8_t vclzq_s16 (int16x8_t __a);  
    int32x4_t vclzq_s32 (int32x4_t __a);  
    uint8x16_t vclzq_u8 (uint8x16_t __a);  
    uint16x8_t vclzq_u16 (uint16x8_t __a);  
    uint32x4_t vclzq_u32 (uint32x4_t __a);  
    /*******************************************Count number of set bits********************/  
    /*--Normal instruction, vcnt -> counts the number of bits that are set to one in each  
    element in a vector, and places the count in the result vector.--*/  
    int8x8_t vcnt_s8 (int8x8_t __a);  
    uint8x8_t vcnt_u8 (uint8x8_t __a);  
    poly8x8_t vcnt_p8 (poly8x8_t __a);  
    int8x16_t vcntq_s8 (int8x16_t __a);  
    uint8x16_t vcntq_u8 (uint8x16_t __a);  
    poly8x16_t vcntq_p8 (poly8x16_t __a);  
    /*****************************************Reciprocal estimate***************************/  
    /*--Normal instruction, vrecpe -> ; finds an approximate reciprocal of each element in a  
    vector, and places it in the result vector.--*/  
    float32x2_t vrecpe_f32 (float32x2_t __a);  
    uint32x2_t vrecpe_u32 (uint32x2_t __a);  
    float32x4_t vrecpeq_f32 (float32x4_t __a);  
    uint32x4_t vrecpeq_u32 (uint32x4_t __a);  
    /****************************************Reciprocal square-root estimate****************/  
    /*--Normal instruction, vrsqrte -> ; finds an approximate reciprocal square root of each 
    element in a vector, and places it in the result vector.--*/  
    float32x2_t vrsqrte_f32 (float32x2_t __a);  
    uint32x2_t vrsqrte_u32 (uint32x2_t __a);  
    float32x4_t vrsqrteq_f32 (float32x4_t __a);  
    uint32x4_t vrsqrteq_u32 (uint32x4_t __a);  
    /*******************************************Get lanes from a vector*********************/  
    /*--vmov -> r = a[b]; returns the value from the specified lane of a vector. 
    Extract lanes from a vector and put into a register.  
    These intrinsics extract a single lane (element) from a vector.--*/  
    int8_t vget_lane_s8 (int8x8_t __a, const int __b);//_mm_extract_epi8  
    int16_t vget_lane_s16 (int16x4_t __a, const int __b);//_mm_extract_epi16  
    int32_t vget_lane_s32 (int32x2_t __a, const int __b);//_mm_extract_epi32  
    float32_t vget_lane_f32 (float32x2_t __a, const int __b);  
    uint8_t vget_lane_u8 (uint8x8_t __a, const int __b);//_mm_extract_epi8  
    uint16_t vget_lane_u16 (uint16x4_t __a, const int __b);//_mm_extract_epi16  
    uint32_t vget_lane_u32 (uint32x2_t __a, const int __b);//_mm_extract_epi32  
    poly8_t vget_lane_p8 (poly8x8_t __a, const int __b);//_mm_extract_epi8  
    poly16_t vget_lane_p16 (poly16x4_t __a, const int __b);//_mm_extract_epi16  
    int64_t vget_lane_s64 (int64x1_t __a, const int __b);//_mm_extract_epi64  
    uint64_t vget_lane_u64 (uint64x1_t __a, const int __b);//_mm_extract_epi64  
    int8_t vgetq_lane_s8 (int8x16_t __a, const int __b);//_mm_extract_epi8  
    int16_t vgetq_lane_s16 (int16x8_t __a, const int __b);//_mm_extract_epi16  
    int32_t vgetq_lane_s32 (int32x4_t __a, const int __b);//_mm_extract_epi32  
    float32_t vgetq_lane_f32 (float32x4_t __a, const int __b);  
    uint8_t vgetq_lane_u8 (uint8x16_t __a, const int __b);//_mm_extract_epi8  
    uint16_t vgetq_lane_u16 (uint16x8_t __a, const int __b);//_mm_extract_epi16  
    uint32_t vgetq_lane_u32 (uint32x4_t __a, const int __b);//_mm_extract_epi32  
    poly8_t vgetq_lane_p8 (poly8x16_t __a, const int __b);//_mm_extract_epi8  
    poly16_t vgetq_lane_p16 (poly16x8_t __a, const int __b);//_mm_extract_epi16  
    int64_t vgetq_lane_s64 (int64x2_t __a, const int __b);//_mm_extract_epi64  
    uint64_t vgetq_lane_u64 (uint64x2_t __a, const int __b);//_mm_extract_epi64  
    /*********************************************Set lanes in a vector********************/  
    /*--vmov -> ; sets the value of the specified lane of a vector and returns the vector  
    with the new value. These intrinsics set a single lane (element) within a vector.--*/  
    int8x8_t vset_lane_s8 (int8_t __a, int8x8_t __b, const int __c);  
    int16x4_t vset_lane_s16 (int16_t __a, int16x4_t __b, const int __c);  
    int32x2_t vset_lane_s32 (int32_t __a, int32x2_t __b, const int __c);  
    float32x2_t vset_lane_f32 (float32_t __a, float32x2_t __b, const int __c);  
    uint8x8_t vset_lane_u8 (uint8_t __a, uint8x8_t __b, const int __c);  
    uint16x4_t vset_lane_u16 (uint16_t __a, uint16x4_t __b, const int __c);  
    uint32x2_t vset_lane_u32 (uint32_t __a, uint32x2_t __b, const int __c);  
    poly8x8_t vset_lane_p8 (poly8_t __a, poly8x8_t __b, const int __c);  
    poly16x4_t vset_lane_p16 (poly16_t __a, poly16x4_t __b, const int __c);  
    int64x1_t vset_lane_s64 (int64_t __a, int64x1_t __b, const int __c);  
    uint64x1_t vset_lane_u64 (uint64_t __a, uint64x1_t __b, const int __c);  
    int8x16_t vsetq_lane_s8 (int8_t __a, int8x16_t __b, const int __c);  
    int16x8_t vsetq_lane_s16 (int16_t __a, int16x8_t __b, const int __c);  
    int32x4_t vsetq_lane_s32 (int32_t __a, int32x4_t __b, const int __c);  
    float32x4_t vsetq_lane_f32 (float32_t __a, float32x4_t __b, const int __c);  
    uint8x16_t vsetq_lane_u8 (uint8_t __a, uint8x16_t __b, const int __c);  
    uint16x8_t vsetq_lane_u16 (uint16_t __a, uint16x8_t __b, const int __c);  
    uint32x4_t vsetq_lane_u32 (uint32_t __a, uint32x4_t __b, const int __c);  
    poly8x16_t vsetq_lane_p8 (poly8_t __a, poly8x16_t __b, const int __c);  
    poly16x8_t vsetq_lane_p16 (poly16_t __a, poly16x8_t __b, const int __c);  
    int64x2_t vsetq_lane_s64 (int64_t __a, int64x2_t __b, const int __c);  
    uint64x2_t vsetq_lane_u64 (uint64_t __a, uint64x2_t __b, const int __c);  
    /****************************************Create vector from literal bit pattern*********/  
    /*--vmov -> ; creates a vector from a 64-bit pattern.  
    Initialize a vector from a literal bit pattern.--*/  
    int8x8_t vcreate_s8 (uint64_t __a);//_mm_loadl_epi64  
    int16x4_t vcreate_s16 (uint64_t __a);//_mm_loadl_epi64  
    int32x2_t vcreate_s32 (uint64_t __a);//_mm_loadl_epi64  
    int64x1_t vcreate_s64 (uint64_t __a);//_mm_loadl_epi64  
    float32x2_t vcreate_f32 (uint64_t __a);  
    uint8x8_t vcreate_u8 (uint64_t __a);//_mm_loadl_epi64  
    uint16x4_t vcreate_u16 (uint64_t __a);//_mm_loadl_epi64  
    uint32x2_t vcreate_u32 (uint64_t __a);//_mm_loadl_epi64  
    uint64x1_t vcreate_u64 (uint64_t __a);//_mm_loadl_epi64  
    poly8x8_t vcreate_p8 (uint64_t __a);//_mm_loadl_epi64  
    poly16x4_t vcreate_p16 (uint64_t __a);//_mm_loadl_epi64  
    /*****************************************Set all lanes to the same value***************/  
    /*--1、Load all lanes of a vector with the same literal value: vdup/vmov -> ri = a;  
    duplicates a scalar into every element of the destination vector.--*/  
    int8x8_t vdup_n_s8 (int8_t __a);//_mm_set1_epi8  
    int16x4_t vdup_n_s16 (int16_t __a);//_mm_set1_epi16  
    int32x2_t vdup_n_s32 (int32_t __a);//_mm_set1_epi32  
    float32x2_t vdup_n_f32 (float32_t __a);//_mm_set1_ps  
    uint8x8_t vdup_n_u8 (uint8_t __a);//_mm_set1_epi8  
    uint16x4_t vdup_n_u16 (uint16_t __a);//_mm_set1_epi16  
    uint32x2_t vdup_n_u32 (uint32_t __a);//_mm_set1_epi32  
    poly8x8_t vdup_n_p8 (poly8_t __a);//_mm_set1_epi8  
    poly16x4_t vdup_n_p16 (poly16_t __a);//_mm_set1_epi16  
    int64x1_t vdup_n_s64 (int64_t __a);  
    uint64x1_t vdup_n_u64 (uint64_t __a);  
    int8x16_t vdupq_n_s8 (int8_t __a);//_mm_set1_epi8  
    int16x8_t vdupq_n_s16 (int16_t __a);//_mm_set1_epi16  
    int32x4_t vdupq_n_s32 (int32_t __a);//_mm_set1_epi32  
    float32x4_t vdupq_n_f32 (float32_t __a);//_mm_set1_ps  
    uint8x16_t vdupq_n_u8 (uint8_t __a);//_mm_set1_epi8  
    uint16x8_t vdupq_n_u16 (uint16_t __a);//_mm_set1_epi16  
    uint32x4_t vdupq_n_u32 (uint32_t __a);//_mm_set1_epi32  
    poly8x16_t vdupq_n_p8 (poly8_t __a);//_mm_set1_epi8  
    poly16x8_t vdupq_n_p16 (poly16_t __a);//_mm_set1_epi16  
    int64x2_t vdupq_n_s64 (int64_t __a);  
    uint64x2_t vdupq_n_u64 (uint64_t __a);  
    int8x8_t vmov_n_s8 (int8_t __a);//_mm_set1_epi8  
    int16x4_t vmov_n_s16 (int16_t __a);//_mm_set1_epi16  
    int32x2_t vmov_n_s32 (int32_t __a);//_mm_set1_epi32  
    float32x2_t vmov_n_f32 (float32_t __a);//_mm_set1_ps  
    uint8x8_t vmov_n_u8 (uint8_t __a);//_mm_set1_epi8  
    uint16x4_t vmov_n_u16 (uint16_t __a);//_mm_set1_epi16  
    uint32x2_t vmov_n_u32 (uint32_t __a);//_mm_set1_epi32  
    poly8x8_t vmov_n_p8 (poly8_t __a);//_mm_set1_epi8  
    poly16x4_t vmov_n_p16 (poly16_t __a);//_mm_set1_epi16  
    int64x1_t vmov_n_s64 (int64_t __a);  
    uint64x1_t vmov_n_u64 (uint64_t __a);  
    int8x16_t vmovq_n_s8 (int8_t __a);//_mm_set1_epi8  
    int16x8_t vmovq_n_s16 (int16_t __a);//_mm_set1_epi16  
    int32x4_t vmovq_n_s32 (int32_t __a);//_mm_set1_epi32  
    float32x4_t vmovq_n_f32 (float32_t __a);//_mm_set1_ps  
    uint8x16_t vmovq_n_u8 (uint8_t __a);//_mm_set1_epi8  
    uint16x8_t vmovq_n_u16 (uint16_t __a);//_mm_set1_epi16  
    uint32x4_t vmovq_n_u32 (uint32_t __a);//_mm_set1_epi32  
    poly8x16_t vmovq_n_p8 (poly8_t __a);//_mm_set1_epi8  
    poly16x8_t vmovq_n_p16 (poly16_t __a);//_mm_set1_epi16  
    int64x2_t vmovq_n_s64 (int64_t __a);  
    uint64x2_t vmovq_n_u64 (uint64_t __a);  
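    /*--Example (hedged sketch): scalar model of vdup_n_s16 / vmov_n_s16, which
    splat one scalar into every lane. model_vdup_n_s16 is a hypothetical helper
    name.--*/
    #include <stdint.h>
    static void model_vdup_n_s16(int16_t a, int16_t r[4]) {
        for (int i = 0; i < 4; ++i)
            r[i] = a; /* ri = a for every lane */
    }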
    /*--2、Load all lanes of the vector to the value of a lane of a vector:  
    vdup/vmov -> ri = a[b]; 
    duplicates the value of lane b of the source vector into every element of the destination vector.--*/  
    int8x8_t vdup_lane_s8 (int8x8_t __a, const int __b);  
    int16x4_t vdup_lane_s16 (int16x4_t __a, const int __b);  
    int32x2_t vdup_lane_s32 (int32x2_t __a, const int __b);  
    float32x2_t vdup_lane_f32 (float32x2_t __a, const int __b);  
    uint8x8_t vdup_lane_u8 (uint8x8_t __a, const int __b);  
    uint16x4_t vdup_lane_u16 (uint16x4_t __a, const int __b);  
    uint32x2_t vdup_lane_u32 (uint32x2_t __a, const int __b);  
    poly8x8_t vdup_lane_p8 (poly8x8_t __a, const int __b);  
    poly16x4_t vdup_lane_p16 (poly16x4_t __a, const int __b);  
    int64x1_t vdup_lane_s64 (int64x1_t __a, const int __b);  
    uint64x1_t vdup_lane_u64 (uint64x1_t __a, const int __b);  
    int8x16_t vdupq_lane_s8 (int8x8_t __a, const int __b);  
    int16x8_t vdupq_lane_s16 (int16x4_t __a, const int __b);  
    int32x4_t vdupq_lane_s32 (int32x2_t __a, const int __b);  
    float32x4_t vdupq_lane_f32 (float32x2_t __a, const int __b);  
    uint8x16_t vdupq_lane_u8 (uint8x8_t __a, const int __b);  
    uint16x8_t vdupq_lane_u16 (uint16x4_t __a, const int __b);  
    uint32x4_t vdupq_lane_u32 (uint32x2_t __a, const int __b);  
    poly8x16_t vdupq_lane_p8 (poly8x8_t __a, const int __b);  
    poly16x8_t vdupq_lane_p16 (poly16x4_t __a, const int __b);  
    int64x2_t vdupq_lane_s64 (int64x1_t __a, const int __b);//_mm_unpacklo_epi64  
    uint64x2_t vdupq_lane_u64 (uint64x1_t __a, const int __b);//_mm_unpacklo_epi64  
    /********************************************Combining vectors**************************/  
    /*--Long instruction, -> r0 = a0, ..., r7 = a7, r8 = b0, ..., r15 = b7; 
    joins two 64-bit vectors into a single 128-bit vector.  
    The output vector contains twice the number of elements as each input vector.  
    The lower half of the output vector contains the elements of the first input vector.--*/  
    int8x16_t vcombine_s8 (int8x8_t __a, int8x8_t __b);//_mm_unpacklo_epi64  
    int16x8_t vcombine_s16 (int16x4_t __a, int16x4_t __b);//_mm_unpacklo_epi64  
    int32x4_t vcombine_s32 (int32x2_t __a, int32x2_t __b);//_mm_unpacklo_epi64  
    int64x2_t vcombine_s64 (int64x1_t __a, int64x1_t __b);//_mm_unpacklo_epi64  
    float32x4_t vcombine_f32 (float32x2_t __a, float32x2_t __b);  
    uint8x16_t vcombine_u8 (uint8x8_t __a, uint8x8_t __b);//_mm_unpacklo_epi64  
    uint16x8_t vcombine_u16 (uint16x4_t __a, uint16x4_t __b);//_mm_unpacklo_epi64  
    uint32x4_t vcombine_u32 (uint32x2_t __a, uint32x2_t __b);//_mm_unpacklo_epi64  
    uint64x2_t vcombine_u64 (uint64x1_t __a, uint64x1_t __b);//_mm_unpacklo_epi64  
    poly8x16_t vcombine_p8 (poly8x8_t __a, poly8x8_t __b);//_mm_unpacklo_epi64  
    poly16x8_t vcombine_p16 (poly16x4_t __a, poly16x4_t __b);//_mm_unpacklo_epi64  
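    /*--Example (hedged sketch): scalar model of vcombine_s16. The first operand
    fills the lower half of the result and the second operand fills the upper
    half. model_vcombine_s16 is a hypothetical helper name.--*/
    #include <stdint.h>
    static void model_vcombine_s16(const int16_t a[4], const int16_t b[4],
                                   int16_t r[8]) {
        for (int i = 0; i < 4; ++i) {
            r[i]     = a[i]; /* lower half <- first input  */
            r[i + 4] = b[i]; /* upper half <- second input */
        }
    }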
    /***************************************Splitting vectors*******************************/  
    /*--1、Narrowing instruction, -> ri = a(i + n/2), where n is the input element count; 
    returns the higher half of the 128-bit input vector. The output is a 64-bit vector 
    that has half the number of elements of the input vector.--*/  
    int8x8_t vget_high_s8 (int8x16_t __a);//_mm_unpackhi_epi64  
    int16x4_t vget_high_s16 (int16x8_t __a);//_mm_unpackhi_epi64  
    int32x2_t vget_high_s32 (int32x4_t __a);//_mm_unpackhi_epi64  
    int64x1_t vget_high_s64 (int64x2_t __a);//_mm_unpackhi_epi64  
    float32x2_t vget_high_f32 (float32x4_t __a);  
    uint8x8_t vget_high_u8 (uint8x16_t __a);//_mm_unpackhi_epi64  
    uint16x4_t vget_high_u16 (uint16x8_t __a);//_mm_unpackhi_epi64  
    uint32x2_t vget_high_u32 (uint32x4_t __a);//_mm_unpackhi_epi64  
    uint64x1_t vget_high_u64 (uint64x2_t __a);//_mm_unpackhi_epi64  
    poly8x8_t vget_high_p8 (poly8x16_t __a);//_mm_unpackhi_epi64  
    poly16x4_t vget_high_p16 (poly16x8_t __a);//_mm_unpackhi_epi64  
    /*--2、Narrowing instruction, -> ri = ai; returns the lower half of the 128-bit 
    input vector. The output is a 64-bit vector that has half the number of elements 
    of the input vector.--*/  
    int8x8_t vget_low_s8 (int8x16_t __a);  
    int16x4_t vget_low_s16 (int16x8_t __a);  
    int32x2_t vget_low_s32 (int32x4_t __a);  
    float32x2_t vget_low_f32 (float32x4_t __a);  
    uint8x8_t vget_low_u8 (uint8x16_t __a);  
    uint16x4_t vget_low_u16 (uint16x8_t __a);  
    uint32x2_t vget_low_u32 (uint32x4_t __a);  
    poly8x8_t vget_low_p8 (poly8x16_t __a);  
    poly16x4_t vget_low_p16 (poly16x8_t __a);  
    int64x1_t vget_low_s64 (int64x2_t __a);  
    uint64x1_t vget_low_u64 (uint64x2_t __a);  
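    /*--Example (hedged sketch): scalar models of vget_low_s16 and vget_high_s16,
    which split a 128-bit vector into its 64-bit halves. The model_* helper names
    are hypothetical.--*/
    #include <stdint.h>
    static void model_vget_low_s16(const int16_t a[8], int16_t r[4]) {
        for (int i = 0; i < 4; ++i) r[i] = a[i];     /* lower half */
    }
    static void model_vget_high_s16(const int16_t a[8], int16_t r[4]) {
        for (int i = 0; i < 4; ++i) r[i] = a[i + 4]; /* upper half */
    }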
    /****************************************************Conversions************************/  
    /*--1、Convert from float: vcvt ->, convert from floating-point to integer.--*/  
    int32x2_t vcvt_s32_f32 (float32x2_t __a);  
    uint32x2_t vcvt_u32_f32 (float32x2_t __a);  
    int32x4_t vcvtq_s32_f32 (float32x4_t __a);  
    uint32x4_t vcvtq_u32_f32 (float32x4_t __a);  
    int32x2_t vcvt_n_s32_f32 (float32x2_t __a, const int __b);  
    uint32x2_t vcvt_n_u32_f32 (float32x2_t __a, const int __b);  
    int32x4_t vcvtq_n_s32_f32 (float32x4_t __a, const int __b);  
    uint32x4_t vcvtq_n_u32_f32 (float32x4_t __a, const int __b);  
    /*--2、Convert to float: vcvt ->, convert from integer to floating-point.--*/  
    float32x2_t vcvt_f32_s32 (int32x2_t __a);  
    float32x2_t vcvt_f32_u32 (uint32x2_t __a);  
    float32x4_t vcvtq_f32_s32 (int32x4_t __a);  
    float32x4_t vcvtq_f32_u32 (uint32x4_t __a);  
    float32x2_t vcvt_n_f32_s32 (int32x2_t __a, const int __b);  
    float32x2_t vcvt_n_f32_u32 (uint32x2_t __a, const int __b);  
    float32x4_t vcvtq_n_f32_s32 (int32x4_t __a, const int __b);  
    float32x4_t vcvtq_n_f32_u32 (uint32x4_t __a, const int __b);  
    /*--3、Convert between single-precision and half-precision numbers: vcvt ->--*/  
    float16x4_t vcvt_f16_f32(float32x4_t a);  
    float32x4_t vcvt_f32_f16(float16x4_t a);  
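    /*--Example (hedged sketch): scalar model of one lane of the fixed-point
    conversions vcvt_n_s32_f32 / vcvt_n_f32_s32. The __b immediate gives the
    number of fraction bits: float -> fixed multiplies by 2^b and truncates
    toward zero; fixed -> float divides by 2^b. The model_* names are
    hypothetical.--*/
    #include <stdint.h>
    static int32_t model_vcvt_n_s32_f32_lane(float a, int b) {
        return (int32_t)(a * (float)(1u << b)); /* scale, truncate toward zero */
    }
    static float model_vcvt_n_f32_s32_lane(int32_t a, int b) {
        return (float)a / (float)(1u << b);     /* undo the scaling */
    }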
    /*************************************************Move**********************************/  
    /*--1、Vector narrow integer (narrowing instruction): vmovn -> ri = low half of ai; 
    copies the least significant half of each element of a quadword vector into  
    the corresponding elements of a doubleword vector.--*/  
    int8x8_t vmovn_s16 (int16x8_t __a);  
    int16x4_t vmovn_s32 (int32x4_t __a);  
    int32x2_t vmovn_s64 (int64x2_t __a);  
    uint8x8_t vmovn_u16 (uint16x8_t __a);  
    uint16x4_t vmovn_u32 (uint32x4_t __a);  
    uint32x2_t vmovn_u64 (uint64x2_t __a);  
    /*--2、Vector long move (long instruction): vmovl -> sign extends or zero extends each element 
    in a doubleword vector to twice its original length, 
    and places the results in a quadword vector.--*/  
    int16x8_t vmovl_s8 (int8x8_t __a);//_mm_cvtepi8_epi16  
    int32x4_t vmovl_s16 (int16x4_t __a);//_mm_cvtepi16_epi32  
    int64x2_t vmovl_s32 (int32x2_t __a);//_mm_cvtepi32_epi64  
    uint16x8_t vmovl_u8 (uint8x8_t __a);//_mm_cvtepu8_epi16  
    uint32x4_t vmovl_u16 (uint16x4_t __a);//_mm_cvtepu16_epi32  
    uint64x2_t vmovl_u32 (uint32x2_t __a);//_mm_cvtepu32_epi64  
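    /*--Example (hedged sketch): scalar models of one lane of vmovn_s16
    (truncating narrow: keep the low half of each element) and vmovl_s8 /
    vmovl_u8 (widen by sign- or zero-extension). The model_* names are
    hypothetical.--*/
    #include <stdint.h>
    static int8_t   model_vmovn_s16_lane(int16_t a) { return (int8_t)a;   } /* low 8 bits  */
    static int16_t  model_vmovl_s8_lane(int8_t a)   { return (int16_t)a;  } /* sign-extend */
    static uint16_t model_vmovl_u8_lane(uint8_t a)  { return (uint16_t)a; } /* zero-extend */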
    /*--3、Vector saturating narrow integer (narrowing instruction): vqmovn -> copies each 
    element of the operand vector to the corresponding element of the destination vector.  
    The result element is half the width of the operand element, and values are saturated 
    to the result width. The results have the same signedness as the operands.--*/  
    int8x8_t vqmovn_s16 (int16x8_t __a);//_mm_packs_epi16  
    int16x4_t vqmovn_s32 (int32x4_t __a);//_mm_packs_epi32  
    int32x2_t vqmovn_s64 (int64x2_t __a);  
    uint8x8_t vqmovn_u16 (uint16x8_t __a);  
    uint16x4_t vqmovn_u32 (uint32x4_t __a);  
    uint32x2_t vqmovn_u64 (uint64x2_t __a);  
    /*--4、Vector saturating narrow integer signed->unsigned (narrowing instruction): copies each element of 
    the operand vector to the corresponding element of the destination vector. 
    The result element is half the width of the operand element, 
    and values are saturated to the result width. 
    The elements in the operand are signed and the elements in the result are unsigned.--*/  
    uint8x8_t vqmovun_s16 (int16x8_t __a);//_mm_packus_epi16  
    uint16x4_t vqmovun_s32 (int32x4_t __a);//_mm_packus_epi32  
    uint32x2_t vqmovun_s64 (int64x2_t __a);  
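    /*--Example (hedged sketch): scalar models of one lane of vqmovn_s16
    (saturating narrow, signed -> signed) and vqmovun_s16 (saturating narrow,
    signed -> unsigned). The model_* names are hypothetical.--*/
    #include <stdint.h>
    static int8_t model_vqmovn_s16_lane(int16_t a) {
        if (a > 127)  return 127;  /* saturate high */
        if (a < -128) return -128; /* saturate low  */
        return (int8_t)a;
    }
    static uint8_t model_vqmovun_s16_lane(int16_t a) {
        if (a > 255) return 255;   /* saturate high        */
        if (a < 0)   return 0;     /* negative clamps to 0 */
        return (uint8_t)a;
    }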
    /******************************************************Table lookup*********************/  
    /*--1、Table lookup: vtbl -> uses byte indexes in a control vector to look up byte  
    values in a table and generate a new vector. Indexes out of range return 0.  
    The table is in Vector1 and uses one, two, three, or four D registers.--*/  
    int8x8_t vtbl1_s8 (int8x8_t __a, int8x8_t __b);  
    uint8x8_t vtbl1_u8 (uint8x8_t __a, uint8x8_t __b);  
    poly8x8_t vtbl1_p8 (poly8x8_t __a, uint8x8_t __b);  
    int8x8_t vtbl2_s8 (int8x8x2_t __a, int8x8_t __b);  
    uint8x8_t vtbl2_u8 (uint8x8x2_t __a, uint8x8_t __b);  
    poly8x8_t vtbl2_p8 (poly8x8x2_t __a, uint8x8_t __b);  
    int8x8_t vtbl3_s8 (int8x8x3_t __a, int8x8_t __b);  
    uint8x8_t vtbl3_u8 (uint8x8x3_t __a, uint8x8_t __b);  
    poly8x8_t vtbl3_p8 (poly8x8x3_t __a, uint8x8_t __b);  
    int8x8_t vtbl4_s8 (int8x8x4_t __a, int8x8_t __b);  
    uint8x8_t vtbl4_u8 (uint8x8x4_t __a, uint8x8_t __b);  
    poly8x8_t vtbl4_p8 (poly8x8x4_t __a, uint8x8_t __b);  
    /*--2、Extended table lookup: vtbx -> uses byte indexes in a control vector to look up 
    byte values in a table and generate a new vector. Indexes out of range leave the  
    destination element unchanged. The table is in Vector2 and uses one, two, three, or 
    four D registers. Vector1 contains the elements of the destination vector.--*/  
    int8x8_t vtbx1_s8 (int8x8_t __a, int8x8_t __b, int8x8_t __c);  
    uint8x8_t vtbx1_u8 (uint8x8_t __a, uint8x8_t __b, uint8x8_t __c);  
    poly8x8_t vtbx1_p8 (poly8x8_t __a, poly8x8_t __b, uint8x8_t __c);  
    int8x8_t vtbx2_s8 (int8x8_t __a, int8x8x2_t __b, int8x8_t __c);  
    uint8x8_t vtbx2_u8 (uint8x8_t __a, uint8x8x2_t __b, uint8x8_t __c);  
    poly8x8_t vtbx2_p8 (poly8x8_t __a, poly8x8x2_t __b, uint8x8_t __c);  
    int8x8_t vtbx3_s8 (int8x8_t __a, int8x8x3_t __b, int8x8_t __c);  
    uint8x8_t vtbx3_u8 (uint8x8_t __a, uint8x8x3_t __b, uint8x8_t __c);  
    poly8x8_t vtbx3_p8 (poly8x8_t __a, poly8x8x3_t __b, uint8x8_t __c);  
    int8x8_t vtbx4_s8 (int8x8_t __a, int8x8x4_t __b, int8x8_t __c);  
    uint8x8_t vtbx4_u8 (uint8x8_t __a, uint8x8x4_t __b, uint8x8_t __c);  
    poly8x8_t vtbx4_p8 (poly8x8_t __a, poly8x8x4_t __b, uint8x8_t __c);  
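    /*--Example (hedged sketch): scalar models of vtbl1_u8 and vtbx1_u8. Both
    look up table bytes by index; they differ only in how an out-of-range index
    is handled: vtbl writes 0, vtbx keeps the existing destination byte. The
    model_* names are hypothetical.--*/
    #include <stdint.h>
    static void model_vtbl1_u8(const uint8_t tbl[8], const uint8_t idx[8],
                               uint8_t r[8]) {
        for (int i = 0; i < 8; ++i)
            r[i] = (idx[i] < 8) ? tbl[idx[i]] : 0;      /* out of range -> 0 */
    }
    static void model_vtbx1_u8(const uint8_t dst[8], const uint8_t tbl[8],
                               const uint8_t idx[8], uint8_t r[8]) {
        for (int i = 0; i < 8; ++i)
            r[i] = (idx[i] < 8) ? tbl[idx[i]] : dst[i]; /* out of range -> keep dst */
    }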
    /***************************************Multiply, scalar, lane**************************/  
    /*--1、Vector multiply by scalar: vmul -> ri = ai * b;  
    multiplies each element in a vector by a scalar,  
    and places the results in the destination vector.--*/  
    int16x4_t vmul_n_s16 (int16x4_t __a, int16_t __b);  
    int32x2_t vmul_n_s32 (int32x2_t __a, int32_t __b);  
    float32x2_t vmul_n_f32 (float32x2_t __a, float32_t __b);  
    uint16x4_t vmul_n_u16 (uint16x4_t __a, uint16_t __b);  
    uint32x2_t vmul_n_u32 (uint32x2_t __a, uint32_t __b);  
    int16x8_t vmulq_n_s16 (int16x8_t __a, int16_t __b);  
    int32x4_t vmulq_n_s32 (int32x4_t __a, int32_t __b);  
    float32x4_t vmulq_n_f32 (float32x4_t __a, float32_t __b);  
    uint16x8_t vmulq_n_u16 (uint16x8_t __a, uint16_t __b);  
    uint32x4_t vmulq_n_u32 (uint32x4_t __a, uint32_t __b);  
    /*--2、Vector multiply by scalar: vmul -> ri = ai * b[c];  
    multiplies the first vector by a scalar.  
    The scalar is the element in the second vector with index c.--*/  
    int16x4_t vmul_lane_s16 (int16x4_t __a, int16x4_t __b, const int __c);  
    int32x2_t vmul_lane_s32 (int32x2_t __a, int32x2_t __b, const int __c);  
    float32x2_t vmul_lane_f32 (float32x2_t __a, float32x2_t __b, const int __c);  
    uint16x4_t vmul_lane_u16 (uint16x4_t __a, uint16x4_t __b, const int __c);  
    uint32x2_t vmul_lane_u32 (uint32x2_t __a, uint32x2_t __b, const int __c);  
    int16x8_t vmulq_lane_s16 (int16x8_t __a, int16x4_t __b, const int __c);  
    int32x4_t vmulq_lane_s32 (int32x4_t __a, int32x2_t __b, const int __c);  
    float32x4_t vmulq_lane_f32 (float32x4_t __a, float32x2_t __b, const int __c);  
    uint16x8_t vmulq_lane_u16 (uint16x8_t __a, uint16x4_t __b, const int __c);  
    uint32x4_t vmulq_lane_u32 (uint32x4_t __a, uint32x2_t __b, const int __c);  
    /*--3、Vector long multiply with scalar: vmull ->  ri = ai * b; 
    multiplies a vector by a scalar.  
    Elements in the result are wider than elements in input vector.--*/  
    int32x4_t vmull_n_s16 (int16x4_t __a, int16_t __b);  
    int64x2_t vmull_n_s32 (int32x2_t __a, int32_t __b);  
    uint32x4_t vmull_n_u16 (uint16x4_t __a, uint16_t __b);  
    uint64x2_t vmull_n_u32 (uint32x2_t __a, uint32_t __b);  
    /*--4、Vector long multiply by scalar: vmull -> ri = ai * b[c]; 
    multiplies the first vector by a scalar.  
    The scalar is the element in the second vector with index c.  
    The elements in the result are wider than the elements in input vector.--*/  
    int32x4_t vmull_lane_s16 (int16x4_t __a, int16x4_t __b, const int __c);  
    int64x2_t vmull_lane_s32 (int32x2_t __a, int32x2_t __b, const int __c);  
    uint32x4_t vmull_lane_u16 (uint16x4_t __a, uint16x4_t __b, const int __c);  
    uint64x2_t vmull_lane_u32 (uint32x2_t __a, uint32x2_t __b, const int __c);  
    /*--5、Vector saturating doubling long multiply with scalar: vqdmull -> ri = sat(2 * ai * b); 
    multiplies the elements in the vector by a scalar, and doubles the results.  
    If any of the results overflow, they are saturated and the sticky QC flag is set.--*/  
    int32x4_t vqdmull_n_s16 (int16x4_t __a, int16_t __b);  
    int64x2_t vqdmull_n_s32 (int32x2_t __a, int32_t __b);  
    /*--6、Vector saturating doubling long multiply by scalar: vqdmull -> ri = sat(2 * ai * b[c]); 
    multiplies the elements in the first vector by a scalar, and doubles the results.  
    The scalar has index c in the second vector. If any of the results overflow,  
    they are saturated and the sticky QC flag is set.--*/  
    int32x4_t vqdmull_lane_s16 (int16x4_t __a, int16x4_t __b, const int __c);  
    int64x2_t vqdmull_lane_s32 (int32x2_t __a, int32x2_t __b, const int __c);  
    /*--7、Vector saturating doubling multiply high with scalar: vqdmulh -> ri = high half of sat(2 * ai * b); 
    multiplies the elements of the vector by a scalar, and doubles the results. 
    It then returns only the high half of the results. 
    If any of the results overflow, they are saturated and the sticky QC flag is set.--*/  
    int16x4_t vqdmulh_n_s16 (int16x4_t __a, int16_t __b);  
    int32x2_t vqdmulh_n_s32 (int32x2_t __a, int32_t __b);  
    int16x8_t vqdmulhq_n_s16 (int16x8_t __a, int16_t __b);  
    int32x4_t vqdmulhq_n_s32 (int32x4_t __a, int32_t __b);  
    /*--8、Vector saturating doubling multiply high by scalar:  
    vqdmulh -> ri = high half of sat(2 * ai * b[c]); 
    multiplies the elements of the first vector by a scalar, and doubles the results. It then 
    returns only the high half of the results. The scalar has index c in the second vector. 
    If any of the results overflow, they are saturated and the sticky QC flag is set.--*/  
    int16x4_t vqdmulh_lane_s16 (int16x4_t __a, int16x4_t __b, const int __c);  
    int32x2_t vqdmulh_lane_s32 (int32x2_t __a, int32x2_t __b, const int __c);  
    int16x8_t vqdmulhq_lane_s16 (int16x8_t __a, int16x4_t __b, const int __c);  
    int32x4_t vqdmulhq_lane_s32 (int32x4_t __a, int32x2_t __b, const int __c);  
    /*--9、Vector saturating rounding doubling multiply high with scalar:  
    vqrdmulh -> ri = high half of sat(2 * ai * b), rounded; 
    multiplies the elements of the vector by a scalar and doubles the results.  
    It then returns only the high half of the rounded results.  
    If any of the results overflow, they are saturated and the sticky QC flag is set.--*/  
    int16x4_t vqrdmulh_n_s16 (int16x4_t __a, int16_t __b);  
    int32x2_t vqrdmulh_n_s32 (int32x2_t __a, int32_t __b);  
    int16x8_t vqrdmulhq_n_s16 (int16x8_t __a, int16_t __b);  
    int32x4_t vqrdmulhq_n_s32 (int32x4_t __a, int32_t __b);  
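    /*--Example (hedged sketch): scalar model of one lane of vqrdmulh_s16:
    double the product, add the rounding constant, keep the high half, and
    saturate (only INT16_MIN * INT16_MIN can overflow the doubling).
    model_vqrdmulh_s16_lane is a hypothetical helper name.--*/
    #include <stdint.h>
    static int16_t model_vqrdmulh_s16_lane(int16_t a, int16_t b) {
        int64_t p = 2 * (int64_t)a * b + (1 << 15); /* double, then round     */
        p >>= 16;                                   /* take the high half     */
        if (p > INT16_MAX) p = INT16_MAX;           /* saturate; sets QC flag */
        return (int16_t)p;
    }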
    /*--10、Vector rounding saturating doubling multiply high by scalar:  
    vqrdmulh -> ri = high half of sat(2 * ai * b[c]), rounded; 
    multiplies the elements of the first vector by a scalar and doubles the results. 
    It then returns only the high half of the rounded results. 
    The scalar has index c in the second vector. If any of the results overflow,  
    they are saturated and the sticky QC flag is set.--*/  
    int16x4_t vqrdmulh_lane_s16 (int16x4_t __a, int16x4_t __b, const int __c);  
    int32x2_t vqrdmulh_lane_s32 (int32x2_t __a, int32x2_t __b, const int __c);  
    int16x8_t vqrdmulhq_lane_s16 (int16x8_t __a, int16x4_t __b, const int __c);  
    int32x4_t vqrdmulhq_lane_s32 (int32x4_t __a, int32x2_t __b, const int __c);  
    /*--11、Vector multiply accumulate with scalar: vmla -> ri = ai + bi * c; 
    multiplies each element in the second vector by a scalar,  
    and adds the results to the corresponding elements of the first vector.--*/  
    int16x4_t vmla_n_s16 (int16x4_t __a, int16x4_t __b, int16_t __c);  
    int32x2_t vmla_n_s32 (int32x2_t __a, int32x2_t __b, int32_t __c);  
    float32x2_t vmla_n_f32 (float32x2_t __a, float32x2_t __b, float32_t __c);  
    uint16x4_t vmla_n_u16 (uint16x4_t __a, uint16x4_t __b, uint16_t __c);  
    uint32x2_t vmla_n_u32 (uint32x2_t __a, uint32x2_t __b, uint32_t __c);  
    int16x8_t vmlaq_n_s16 (int16x8_t __a, int16x8_t __b, int16_t __c);  
    int32x4_t vmlaq_n_s32 (int32x4_t __a, int32x4_t __b, int32_t __c);  
    float32x4_t vmlaq_n_f32 (float32x4_t __a, float32x4_t __b, float32_t __c);  
    uint16x8_t vmlaq_n_u16 (uint16x8_t __a, uint16x8_t __b, uint16_t __c);  
    uint32x4_t vmlaq_n_u32 (uint32x4_t __a, uint32x4_t __b, uint32_t __c);  
    /*--12、Vector multiply accumulate by scalar: vmla -> ri = ai + bi * c[d]; 
    multiplies each element in the second vector by a scalar,  
    and adds the results to the corresponding elements of the first vector.  
    The scalar has index d in the third vector.--*/  
    int16x4_t vmla_lane_s16 (int16x4_t __a, int16x4_t __b, int16x4_t __c, const int __d);  
    int32x2_t vmla_lane_s32 (int32x2_t __a, int32x2_t __b, int32x2_t __c, const int __d);  
    float32x2_t vmla_lane_f32 (float32x2_t __a, float32x2_t __b, float32x2_t __c,  
        const int __d);  
    uint16x4_t vmla_lane_u16 (uint16x4_t __a, uint16x4_t __b, uint16x4_t __c, const int __d);  
    uint32x2_t vmla_lane_u32 (uint32x2_t __a, uint32x2_t __b, uint32x2_t __c, const int __d);  
    int16x8_t vmlaq_lane_s16 (int16x8_t __a, int16x8_t __b, int16x4_t __c, const int __d);  
    int32x4_t vmlaq_lane_s32 (int32x4_t __a, int32x4_t __b, int32x2_t __c, const int __d);  
    float32x4_t vmlaq_lane_f32 (float32x4_t __a, float32x4_t __b, float32x2_t __c,  
        const int __d);  
    uint16x8_t vmlaq_lane_u16 (uint16x8_t __a, uint16x8_t __b, uint16x4_t __c, const int __d);  
    uint32x4_t vmlaq_lane_u32 (uint32x4_t __a, uint32x4_t __b, uint32x2_t __c, const int __d);  
    /*--13、Vector widening multiply accumulate with scalar: vmlal -> ri = ai + bi * c; 
    multiplies each element in the second vector by a scalar, and adds the results into the  
    corresponding elements of the first vector. The elements in the result are wider.--*/  
    int32x4_t vmlal_n_s16 (int32x4_t __a, int16x4_t __b, int16_t __c);  
    int64x2_t vmlal_n_s32 (int64x2_t __a, int32x2_t __b, int32_t __c);  
    uint32x4_t vmlal_n_u16 (uint32x4_t __a, uint16x4_t __b, uint16_t __c);  
    uint64x2_t vmlal_n_u32 (uint64x2_t __a, uint32x2_t __b, uint32_t __c);  
    /*--14、Vector widening multiply accumulate by scalar: vmlal -> ri = ai + bi * c[d]; 
    multiplies each element in the second vector by a scalar, and adds the results to the  
    corresponding elements of the first vector. The scalar has index d in the third vector. 
    The elements in the result are wider.--*/  
    int32x4_t vmlal_lane_s16 (int32x4_t __a, int16x4_t __b, int16x4_t __c, const int __d);  
    int64x2_t vmlal_lane_s32 (int64x2_t __a, int32x2_t __b, int32x2_t __c, const int __d);  
    uint32x4_t vmlal_lane_u16 (uint32x4_t __a, uint16x4_t __b, uint16x4_t __c, const int __d);  
    uint64x2_t vmlal_lane_u32 (uint64x2_t __a, uint32x2_t __b, uint32x2_t __c, const int __d);  
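    /*--Example (hedged sketch): scalar model of vmlal_n_s16, a widening
    multiply-accumulate: each 16-bit product is computed at 32-bit width before
    being added, so the per-lane products cannot overflow. model_vmlal_n_s16 is
    a hypothetical helper name.--*/
    #include <stdint.h>
    static void model_vmlal_n_s16(int32_t acc[4], const int16_t b[4], int16_t c) {
        for (int i = 0; i < 4; ++i)
            acc[i] += (int32_t)b[i] * c; /* widen, multiply, accumulate */
    }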
    /*--15、Vector widening saturating doubling multiply accumulate with scalar:  
    vqdmlal -> ri = sat(ai + 2 * bi * c); 
    multiplies the elements in the second vector by a scalar, and doubles the results.  
    It then adds the results to the elements in the first vector. 
    If any of the results overflow, they are saturated and the sticky QC flag is set.--*/  
    int32x4_t vqdmlal_n_s16 (int32x4_t __a, int16x4_t __b, int16_t __c);  
    int64x2_t vqdmlal_n_s32 (int64x2_t __a, int32x2_t __b, int32_t __c);  
    /*--16、Vector widening saturating doubling multiply accumulate by scalar:  
    vqdmlal -> ri = sat(ai + 2 * bi * c[d]); 
    multiplies each element in the second vector by a scalar, doubles the results and adds  
    them to the corresponding elements of the first vector. The scalar has index d in the  
    third vector. If any of the results overflow, 
    they are saturated and the sticky QC flag is set.--*/  
    int32x4_t vqdmlal_lane_s16 (int32x4_t __a, int16x4_t __b, int16x4_t __c, const int __d);  
    int64x2_t vqdmlal_lane_s32 (int64x2_t __a, int32x2_t __b, int32x2_t __c, const int __d);  
    /*--17、Vector multiply subtract with scalar: vmls -> ri = ai - bi * c; 
    multiplies each element in a vector by a scalar, subtracts the results from the  
    corresponding elements of the destination vector,  
    and places the final results in the destination vector.--*/  
    int16x4_t vmls_n_s16 (int16x4_t __a, int16x4_t __b, int16_t __c);  
    int32x2_t vmls_n_s32 (int32x2_t __a, int32x2_t __b, int32_t __c);  
    float32x2_t vmls_n_f32 (float32x2_t __a, float32x2_t __b, float32_t __c);  
    uint16x4_t vmls_n_u16 (uint16x4_t __a, uint16x4_t __b, uint16_t __c);  
    uint32x2_t vmls_n_u32 (uint32x2_t __a, uint32x2_t __b, uint32_t __c);  
    int16x8_t vmlsq_n_s16 (int16x8_t __a, int16x8_t __b, int16_t __c);  
    int32x4_t vmlsq_n_s32 (int32x4_t __a, int32x4_t __b, int32_t __c);  
    float32x4_t vmlsq_n_f32 (float32x4_t __a, float32x4_t __b, float32_t __c);  
    uint16x8_t vmlsq_n_u16 (uint16x8_t __a, uint16x8_t __b, uint16_t __c);  
    uint32x4_t vmlsq_n_u32 (uint32x4_t __a, uint32x4_t __b, uint32_t __c);  
    /*--18、Vector multiply subtract by scalar: vmls -> ri = ai - bi * c[d]; 
    multiplies each element in the second vector by a scalar, and subtracts them from the 
    corresponding elements of the first vector. 
    The scalar has index d in the third vector.--*/  
    int16x4_t vmls_lane_s16 (int16x4_t __a, int16x4_t __b, int16x4_t __c, const int __d);  
    int32x2_t vmls_lane_s32 (int32x2_t __a, int32x2_t __b, int32x2_t __c, const int __d);  
    float32x2_t vmls_lane_f32 (float32x2_t __a, float32x2_t __b, float32x2_t __c,  
        const int __d);  
    uint16x4_t vmls_lane_u16 (uint16x4_t __a, uint16x4_t __b, uint16x4_t __c, const int __d);  
    uint32x2_t vmls_lane_u32 (uint32x2_t __a, uint32x2_t __b, uint32x2_t __c, const int __d);  
    int16x8_t vmlsq_lane_s16 (int16x8_t __a, int16x8_t __b, int16x4_t __c, const int __d);  
    int32x4_t vmlsq_lane_s32 (int32x4_t __a, int32x4_t __b, int32x2_t __c, const int __d);  
    float32x4_t vmlsq_lane_f32 (float32x4_t __a, float32x4_t __b, float32x2_t __c,  
        const int __d);  
    uint16x8_t vmlsq_lane_u16 (uint16x8_t __a, uint16x8_t __b, uint16x4_t __c, const int __d);  
    uint32x4_t vmlsq_lane_u32 (uint32x4_t __a, uint32x4_t __b, uint32x2_t __c, const int __d);  
    /*--19、Vector widening multiply subtract with scalar: vmlsl -> ri = ai - bi * c; 
    multiplies the elements in the second vector by a scalar, then subtracts the results from 
    the elements in the first vector. The elements of the result are wider.--*/  
    int32x4_t vmlsl_n_s16 (int32x4_t __a, int16x4_t __b, int16_t __c);  
    int64x2_t vmlsl_n_s32 (int64x2_t __a, int32x2_t __b, int32_t __c);  
    uint32x4_t vmlsl_n_u16 (uint32x4_t __a, uint16x4_t __b, uint16_t __c);  
    uint64x2_t vmlsl_n_u32 (uint64x2_t __a, uint32x2_t __b, uint32_t __c);  
    /*--20、Vector widening multiply subtract by scalar: vmlsl -> ri = ai - bi * c[d]; 
    multiplies each element in the second vector by a scalar,  
    and subtracts them from the corresponding elements of the first vector.  
    The scalar has index d in the third vector. The elements in the result are wider.--*/  
    int32x4_t vmlsl_lane_s16 (int32x4_t __a, int16x4_t __b, int16x4_t __c, const int __d);  
    int64x2_t vmlsl_lane_s32 (int64x2_t __a, int32x2_t __b, int32x2_t __c, const int __d);  
    uint32x4_t vmlsl_lane_u16 (uint32x4_t __a, uint16x4_t __b, uint16x4_t __c, const int __d);  
    uint64x2_t vmlsl_lane_u32 (uint64x2_t __a, uint32x2_t __b, uint32x2_t __c, const int __d);  
    /*--21、Vector widening saturating doubling multiply subtract with scalar:  
    vqdmlsl -> ri = sat(ai - 2 * bi * c); 
    multiplies the elements of the second vector with a scalar and doubles the results.  
    It then subtracts the results from the elements in the first vector. 
    If any of the results overflow, they are saturated and the sticky QC flag is set.--*/  
    int32x4_t vqdmlsl_n_s16 (int32x4_t __a, int16x4_t __b, int16_t __c);  
    int64x2_t vqdmlsl_n_s32 (int64x2_t __a, int32x2_t __b, int32_t __c);  
    /*--22、Vector widening saturating doubling multiply subtract by scalar: 
    vqdmlsl -> ri = sat(ai - 2 * bi * c[d]); 
    multiplies each element in the second vector by a scalar, doubles the results and subtracts 
    them from the corresponding elements of the first vector. The scalar has index d in the  
    third vector. If any of the results overflow,  
    they are saturated and the sticky QC flag is set.--*/  
    int32x4_t vqdmlsl_lane_s16 (int32x4_t __a, int16x4_t __b, int16x4_t __c, const int __d);  
    int64x2_t vqdmlsl_lane_s32 (int64x2_t __a, int32x2_t __b, int32x2_t __c, const int __d);  
    /*****************************************************Vector extract********************/  
    /*--Vector extract: vext -> extracts c elements from the lower end of the second operand 
    vector and the remaining elements from the higher end of the first, and combines them to 
    form the result vector. The elements from the second operand are placed in the most  
    significant part of the result vector. The elements from the first operand are placed in 
    the least significant part of the result vector. This intrinsic cycles the elements 
    through the lanes if the two input vectors are the same.--*/  
    int8x8_t vext_s8 (int8x8_t __a, int8x8_t __b, const int __c);  
    int16x4_t vext_s16 (int16x4_t __a, int16x4_t __b, const int __c);  
    int32x2_t vext_s32 (int32x2_t __a, int32x2_t __b, const int __c);  
    int64x1_t vext_s64 (int64x1_t __a, int64x1_t __b, const int __c);  
    float32x2_t vext_f32 (float32x2_t __a, float32x2_t __b, const int __c);  
    uint8x8_t vext_u8 (uint8x8_t __a, uint8x8_t __b, const int __c);  
    uint16x4_t vext_u16 (uint16x4_t __a, uint16x4_t __b, const int __c);  
    uint32x2_t vext_u32 (uint32x2_t __a, uint32x2_t __b, const int __c);  
    uint64x1_t vext_u64 (uint64x1_t __a, uint64x1_t __b, const int __c);  
    poly8x8_t vext_p8 (poly8x8_t __a, poly8x8_t __b, const int __c);  
    poly16x4_t vext_p16 (poly16x4_t __a, poly16x4_t __b, const int __c);  
    int8x16_t vextq_s8 (int8x16_t __a, int8x16_t __b, const int __c);//_mm_alignr_epi8   
    int16x8_t vextq_s16 (int16x8_t __a, int16x8_t __b, const int __c);//_mm_alignr_epi8   
    int32x4_t vextq_s32 (int32x4_t __a, int32x4_t __b, const int __c);//_mm_alignr_epi8  
    int64x2_t vextq_s64 (int64x2_t __a, int64x2_t __b, const int __c);//_mm_alignr_epi8  
    float32x4_t vextq_f32 (float32x4_t __a, float32x4_t __b, const int __c);//_mm_alignr_epi8  
    uint8x16_t vextq_u8 (uint8x16_t __a, uint8x16_t __b, const int __c);//_mm_alignr_epi8  
    uint16x8_t vextq_u16 (uint16x8_t __a, uint16x8_t __b, const int __c);//_mm_alignr_epi8  
    uint32x4_t vextq_u32 (uint32x4_t __a, uint32x4_t __b, const int __c);//_mm_alignr_epi8  
    uint64x2_t vextq_u64 (uint64x2_t __a, uint64x2_t __b, const int __c);//_mm_alignr_epi8  
    poly8x16_t vextq_p8 (poly8x16_t __a, poly8x16_t __b, const int __c);//_mm_alignr_epi8  
    poly16x8_t vextq_p16 (poly16x8_t __a, poly16x8_t __b, const int __c);//_mm_alignr_epi8  
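    /*--Example (hedged sketch): scalar model of vext_u8(a, b, c): the result is
    the top (8 - c) elements of a followed by the bottom c elements of b, i.e.
    an 8-element window starting at element c of the concatenation a:b.
    model_vext_u8 is a hypothetical helper name.--*/
    #include <stdint.h>
    static void model_vext_u8(const uint8_t a[8], const uint8_t b[8], int c,
                              uint8_t r[8]) {
        for (int i = 0; i < 8; ++i)
            r[i] = (i + c < 8) ? a[i + c] : b[i + c - 8];
    }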
    /****************************************************Reverse elements*******************/  
    /*--1、Reverse vector elements (swap endianness): vrev64 -> reverses the order of 8-bit,  
    16-bit, or 32-bit elements within each doubleword of the vector,  
    and places the result in the corresponding destination vector.--*/  
    int8x8_t vrev64_s8 (int8x8_t __a);  
    int16x4_t vrev64_s16 (int16x4_t __a);  
    int32x2_t vrev64_s32 (int32x2_t __a);  
    float32x2_t vrev64_f32 (float32x2_t __a);//_mm_shuffle_ps  
    uint8x8_t vrev64_u8 (uint8x8_t __a);  
    uint16x4_t vrev64_u16 (uint16x4_t __a);  
    uint32x2_t vrev64_u32 (uint32x2_t __a);  
    poly8x8_t vrev64_p8 (poly8x8_t __a);  
    poly16x4_t vrev64_p16 (poly16x4_t __a);  
    int8x16_t vrev64q_s8 (int8x16_t __a);  
    int16x8_t vrev64q_s16 (int16x8_t __a);  
    int32x4_t vrev64q_s32 (int32x4_t __a);  
    float32x4_t vrev64q_f32 (float32x4_t __a);//_mm_shuffle_ps  
    uint8x16_t vrev64q_u8 (uint8x16_t __a);  
    uint16x8_t vrev64q_u16 (uint16x8_t __a);  
    uint32x4_t vrev64q_u32 (uint32x4_t __a);  
    poly8x16_t vrev64q_p8 (poly8x16_t __a);  
    poly16x8_t vrev64q_p16 (poly16x8_t __a);  
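    /*--Example (hedged sketch): scalar model of vrev64_u16, which reverses the
    order of the 16-bit elements within each 64-bit doubleword.
    model_vrev64_u16 is a hypothetical helper name.--*/
    #include <stdint.h>
    static void model_vrev64_u16(const uint16_t a[4], uint16_t r[4]) {
        for (int i = 0; i < 4; ++i)
            r[i] = a[3 - i]; /* element order reversed within the doubleword */
    }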
    /*--2、Reverse vector elements (swap endianness): vrev32 -> reverses the order of 8-bit  
    or 16-bit elements within each word of the vector,  
    and places the result in the corresponding destination vector.--*/  
    int8x8_t vrev32_s8 (int8x8_t __a);  
    int16x4_t vrev32_s16 (int16x4_t __a);  
    uint8x8_t vrev32_u8 (uint8x8_t __a);  
    uint16x4_t vrev32_u16 (uint16x4_t __a);  
    poly8x8_t vrev32_p8 (poly8x8_t __a);  
    poly16x4_t vrev32_p16 (poly16x4_t __a);  
    int8x16_t vrev32q_s8 (int8x16_t __a);  
    int16x8_t vrev32q_s16 (int16x8_t __a);  
    uint8x16_t vrev32q_u8 (uint8x16_t __a);  
    uint16x8_t vrev32q_u16 (uint16x8_t __a);  
    poly8x16_t vrev32q_p8 (poly8x16_t __a);  
    poly16x8_t vrev32q_p16 (poly16x8_t __a);  
    /*--3、Reverse vector elements (swap endianness): vrev16 -> reverses the order  
    of 8-bit elements within each halfword of the vector,  
    and places the result in the corresponding destination vector.--*/  
    int8x8_t vrev16_s8 (int8x8_t __a);  
    uint8x8_t vrev16_u8 (uint8x8_t __a);  
    poly8x8_t vrev16_p8 (poly8x8_t __a);  
    int8x16_t vrev16q_s8 (int8x16_t __a);  
    uint8x16_t vrev16q_u8 (uint8x16_t __a);  
    poly8x16_t vrev16q_p8 (poly8x16_t __a);  
    /**********************************************************Bitwise Select***************/  
    /*--Bitwise Select: vbsl -> selects each bit for the destination from the second operand  
    (__b) if the corresponding bit of the first operand (the selection mask, __a) is 1,  
    or from the third operand (__c) if the corresponding bit of the mask is 0.--*/  
    int8x8_t vbsl_s8 (uint8x8_t __a, int8x8_t __b, int8x8_t __c);  
    int16x4_t vbsl_s16 (uint16x4_t __a, int16x4_t __b, int16x4_t __c);  
    int32x2_t vbsl_s32 (uint32x2_t __a, int32x2_t __b, int32x2_t __c);  
    int64x1_t vbsl_s64 (uint64x1_t __a, int64x1_t __b, int64x1_t __c);  
    float32x2_t vbsl_f32 (uint32x2_t __a, float32x2_t __b, float32x2_t __c);  
    uint8x8_t vbsl_u8 (uint8x8_t __a, uint8x8_t __b, uint8x8_t __c);  
    uint16x4_t vbsl_u16 (uint16x4_t __a, uint16x4_t __b, uint16x4_t __c);  
    uint32x2_t vbsl_u32 (uint32x2_t __a, uint32x2_t __b, uint32x2_t __c);  
    uint64x1_t vbsl_u64 (uint64x1_t __a, uint64x1_t __b, uint64x1_t __c);  
    poly8x8_t vbsl_p8 (uint8x8_t __a, poly8x8_t __b, poly8x8_t __c);  
    poly16x4_t vbsl_p16 (uint16x4_t __a, poly16x4_t __b, poly16x4_t __c);  
    int8x16_t vbslq_s8 (uint8x16_t __a, int8x16_t __b, int8x16_t __c);  
    int16x8_t vbslq_s16 (uint16x8_t __a, int16x8_t __b, int16x8_t __c);  
    int32x4_t vbslq_s32 (uint32x4_t __a, int32x4_t __b, int32x4_t __c);  
    int64x2_t vbslq_s64 (uint64x2_t __a, int64x2_t __b, int64x2_t __c);  
    float32x4_t vbslq_f32 (uint32x4_t __a, float32x4_t __b, float32x4_t __c);  
    uint8x16_t vbslq_u8 (uint8x16_t __a, uint8x16_t __b, uint8x16_t __c);  
    uint16x8_t vbslq_u16 (uint16x8_t __a, uint16x8_t __b, uint16x8_t __c);  
    uint32x4_t vbslq_u32 (uint32x4_t __a, uint32x4_t __b, uint32x4_t __c);  
    uint64x2_t vbslq_u64 (uint64x2_t __a, uint64x2_t __b, uint64x2_t __c);  
    poly8x16_t vbslq_p8 (uint8x16_t __a, poly8x16_t __b, poly8x16_t __c);  
    poly16x8_t vbslq_p16 (uint16x8_t __a, poly16x8_t __b, poly16x8_t __c);  
    /************************************Transposition operations***************************/  
    /*--1、Transpose elements: vtrn -> treats the elements of its input vectors as elements 
    of 2 x 2 matrices, and transposes the matrices. Essentially, it exchanges the elements  
    with odd indices from Vector1 with the elements with even indices from Vector2.--*/  
    int8x8x2_t vtrn_s8 (int8x8_t __a, int8x8_t __b);  
    int16x4x2_t vtrn_s16 (int16x4_t __a, int16x4_t __b);  
    uint8x8x2_t vtrn_u8 (uint8x8_t __a, uint8x8_t __b);  
    uint16x4x2_t vtrn_u16 (uint16x4_t __a, uint16x4_t __b);  
    poly8x8x2_t vtrn_p8 (poly8x8_t __a, poly8x8_t __b);  
    poly16x4x2_t vtrn_p16 (poly16x4_t __a, poly16x4_t __b);  
    int32x2x2_t vtrn_s32 (int32x2_t __a, int32x2_t __b);  
    float32x2x2_t vtrn_f32 (float32x2_t __a, float32x2_t __b);  
    uint32x2x2_t vtrn_u32 (uint32x2_t __a, uint32x2_t __b);  
    int8x16x2_t vtrnq_s8 (int8x16_t __a, int8x16_t __b);  
    int16x8x2_t vtrnq_s16 (int16x8_t __a, int16x8_t __b);  
    int32x4x2_t vtrnq_s32 (int32x4_t __a, int32x4_t __b);  
    float32x4x2_t vtrnq_f32 (float32x4_t __a, float32x4_t __b);  
    uint8x16x2_t vtrnq_u8 (uint8x16_t __a, uint8x16_t __b);  
    uint16x8x2_t vtrnq_u16 (uint16x8_t __a, uint16x8_t __b);  
    uint32x4x2_t vtrnq_u32 (uint32x4_t __a, uint32x4_t __b);  
    poly8x16x2_t vtrnq_p8 (poly8x16_t __a, poly8x16_t __b);  
    poly16x8x2_t vtrnq_p16 (poly16x8_t __a, poly16x8_t __b);  
    /*--2、Interleave elements(Zip elements):  
    vzip ->  (Vector Zip) interleaves the elements of two vectors.--*/  
    int8x8x2_t vzip_s8 (int8x8_t __a, int8x8_t __b);  
    int16x4x2_t vzip_s16 (int16x4_t __a, int16x4_t __b);  
    uint8x8x2_t vzip_u8 (uint8x8_t __a, uint8x8_t __b);  
    uint16x4x2_t vzip_u16 (uint16x4_t __a, uint16x4_t __b);  
    poly8x8x2_t vzip_p8 (poly8x8_t __a, poly8x8_t __b);  
    poly16x4x2_t vzip_p16 (poly16x4_t __a, poly16x4_t __b);  
    int32x2x2_t vzip_s32 (int32x2_t __a, int32x2_t __b);  
    float32x2x2_t vzip_f32 (float32x2_t __a, float32x2_t __b);  
    uint32x2x2_t vzip_u32 (uint32x2_t __a, uint32x2_t __b);  
    int8x16x2_t vzipq_s8 (int8x16_t __a, int8x16_t __b);  
    int16x8x2_t vzipq_s16 (int16x8_t __a, int16x8_t __b);  
    int32x4x2_t vzipq_s32 (int32x4_t __a, int32x4_t __b);  
    float32x4x2_t vzipq_f32 (float32x4_t __a, float32x4_t __b);  
    uint8x16x2_t vzipq_u8 (uint8x16_t __a, uint8x16_t __b);  
    uint16x8x2_t vzipq_u16 (uint16x8_t __a, uint16x8_t __b);  
    uint32x4x2_t vzipq_u32 (uint32x4_t __a, uint32x4_t __b);  
    poly8x16x2_t vzipq_p8 (poly8x16_t __a, poly8x16_t __b);  
    poly16x8x2_t vzipq_p16 (poly16x8_t __a, poly16x8_t __b);  
    /*--3、De-Interleave elements(Unzip elements):  
    vuzp -> (Vector Unzip) de-interleaves the elements of two vectors. 
    De-interleaving is the inverse process of interleaving.--*/  
    int8x8x2_t vuzp_s8 (int8x8_t __a, int8x8_t __b);  
    int16x4x2_t vuzp_s16 (int16x4_t __a, int16x4_t __b);  
    int32x2x2_t vuzp_s32 (int32x2_t __a, int32x2_t __b);  
    float32x2x2_t vuzp_f32 (float32x2_t __a, float32x2_t __b);  
    uint8x8x2_t vuzp_u8 (uint8x8_t __a, uint8x8_t __b);  
    uint16x4x2_t vuzp_u16 (uint16x4_t __a, uint16x4_t __b);  
    uint32x2x2_t vuzp_u32 (uint32x2_t __a, uint32x2_t __b);  
    poly8x8x2_t vuzp_p8 (poly8x8_t __a, poly8x8_t __b);  
    poly16x4x2_t vuzp_p16 (poly16x4_t __a, poly16x4_t __b);  
    int8x16x2_t vuzpq_s8 (int8x16_t __a, int8x16_t __b);  
    int16x8x2_t vuzpq_s16 (int16x8_t __a, int16x8_t __b);  
    int32x4x2_t vuzpq_s32 (int32x4_t __a, int32x4_t __b);  
    float32x4x2_t vuzpq_f32 (float32x4_t __a, float32x4_t __b);  
    uint8x16x2_t vuzpq_u8 (uint8x16_t __a, uint8x16_t __b);  
    uint16x8x2_t vuzpq_u16 (uint16x8_t __a, uint16x8_t __b);  
    uint32x4x2_t vuzpq_u32 (uint32x4_t __a, uint32x4_t __b);  
    poly8x16x2_t vuzpq_p8 (poly8x16_t __a, poly8x16_t __b);  
    poly16x8x2_t vuzpq_p16 (poly16x8_t __a, poly16x8_t __b);  
    /*********************************************************Load**************************/  
    /*--1、Load a single vector from memory: vld1 -> loads a vector from memory.--*/  
    int8x8_t vld1_s8 (const int8_t * __a);  
    int16x4_t vld1_s16 (const int16_t * __a);  
    int32x2_t vld1_s32 (const int32_t * __a);  
    int64x1_t vld1_s64 (const int64_t * __a);  
    float32x2_t vld1_f32 (const float32_t * __a);  
    uint8x8_t vld1_u8 (const uint8_t * __a);//_mm_loadl_epi64  
    uint16x4_t vld1_u16 (const uint16_t * __a);//_mm_loadl_epi64  
    uint32x2_t vld1_u32 (const uint32_t * __a);//_mm_loadl_epi64  
    uint64x1_t vld1_u64 (const uint64_t * __a);//_mm_loadl_epi64  
    poly8x8_t vld1_p8 (const poly8_t * __a);  
    poly16x4_t vld1_p16 (const poly16_t * __a);  
    int8x16_t vld1q_s8 (const int8_t * __a);  
    int16x8_t vld1q_s16 (const int16_t * __a);  
    int32x4_t vld1q_s32 (const int32_t * __a);  
    int64x2_t vld1q_s64 (const int64_t * __a);  
    float32x4_t vld1q_f32 (const float32_t * __a);  
    uint8x16_t vld1q_u8 (const uint8_t * __a);  
    uint16x8_t vld1q_u16 (const uint16_t * __a);  
    uint32x4_t vld1q_u32 (const uint32_t * __a);  
    uint64x2_t vld1q_u64 (const uint64_t * __a);  
    poly8x16_t vld1q_p8 (const poly8_t * __a);  
    poly16x8_t vld1q_p16 (const poly16_t * __a);  
    /*--2、Load a single lane from memory: vld1 -> loads one element of the input vector  
    from memory and returns this in the result vector. Elements of the vector that are not 
    loaded are returned in the result vector unaltered.  
    c is the index of the element to load.--*/  
    int8x8_t vld1_lane_s8 (const int8_t * __a, int8x8_t __b, const int __c);//_mm_insert_epi8  
    int16x4_t vld1_lane_s16 (const int16_t * __a, int16x4_t __b,  
        const int __c);//_mm_insert_epi16  
    int32x2_t vld1_lane_s32 (const int32_t * __a, int32x2_t __b,   
        const int __c);//_mm_insert_epi32  
    float32x2_t vld1_lane_f32 (const float32_t * __a, float32x2_t __b, const int __c);  
    uint8x8_t vld1_lane_u8 (const uint8_t * __a, uint8x8_t __b,   
        const int __c);//_mm_insert_epi8  
    uint16x4_t vld1_lane_u16 (const uint16_t * __a, uint16x4_t __b,   
        const int __c);//_mm_insert_epi16  
    uint32x2_t vld1_lane_u32 (const uint32_t * __a, uint32x2_t __b,   
        const int __c);//_mm_insert_epi32  
    poly8x8_t vld1_lane_p8 (const poly8_t * __a, poly8x8_t __b,   
        const int __c);//_mm_insert_epi8  
    poly16x4_t vld1_lane_p16 (const poly16_t * __a, poly16x4_t __b,   
        const int __c);//_mm_insert_epi16  
    int64x1_t vld1_lane_s64 (const int64_t * __a, int64x1_t __b, const int __c);  
    uint64x1_t vld1_lane_u64 (const uint64_t * __a, uint64x1_t __b, const int __c);  
    int8x16_t vld1q_lane_s8 (const int8_t * __a, int8x16_t __b,   
        const int __c);//_mm_insert_epi8  
    int16x8_t vld1q_lane_s16 (const int16_t * __a, int16x8_t __b,   
        const int __c);//_mm_insert_epi16  
    int32x4_t vld1q_lane_s32 (const int32_t * __a, int32x4_t __b,   
        const int __c);//_mm_insert_epi32  
    float32x4_t vld1q_lane_f32 (const float32_t * __a, float32x4_t __b, const int __c);  
    uint8x16_t vld1q_lane_u8 (const uint8_t * __a, uint8x16_t __b,   
        const int __c);//_mm_insert_epi8  
    uint16x8_t vld1q_lane_u16 (const uint16_t * __a, uint16x8_t __b,   
        const int __c);//_mm_insert_epi16  
    uint32x4_t vld1q_lane_u32 (const uint32_t * __a, uint32x4_t __b,   
        const int __c);//_mm_insert_epi32  
    poly8x16_t vld1q_lane_p8 (const poly8_t * __a, poly8x16_t __b,   
        const int __c);//_mm_insert_epi8  
    poly16x8_t vld1q_lane_p16 (const poly16_t * __a, poly16x8_t __b,   
        const int __c);//_mm_insert_epi16  
    int64x2_t vld1q_lane_s64 (const int64_t * __a, int64x2_t __b,   
        const int __c);//_mm_insert_epi64  
    uint64x2_t vld1q_lane_u64 (const uint64_t * __a, uint64x2_t __b,   
        const int __c);//_mm_insert_epi64  
    /*--3、Load all lanes of vector with same value from memory: vld1 ->  
    loads one element in a vector from memory.  
    The loaded element is copied to all other lanes of the vector.--*/  
    int8x8_t vld1_dup_s8 (const int8_t * __a);//_mm_set1_epi8  
    int16x4_t vld1_dup_s16 (const int16_t * __a);//_mm_set1_epi16  
    int32x2_t vld1_dup_s32 (const int32_t * __a);//_mm_set1_epi32  
    float32x2_t vld1_dup_f32 (const float32_t * __a);//_mm_set1_ps  
    uint8x8_t vld1_dup_u8 (const uint8_t * __a);//_mm_set1_epi8  
    uint16x4_t vld1_dup_u16 (const uint16_t * __a);//_mm_set1_epi16  
    uint32x2_t vld1_dup_u32 (const uint32_t * __a);//_mm_set1_epi32  
    poly8x8_t vld1_dup_p8 (const poly8_t * __a);//_mm_set1_epi8  
    poly16x4_t vld1_dup_p16 (const poly16_t * __a);//_mm_set1_epi16  
    int64x1_t vld1_dup_s64 (const int64_t * __a);  
    uint64x1_t vld1_dup_u64 (const uint64_t * __a);  
    int8x16_t vld1q_dup_s8 (const int8_t * __a);//_mm_set1_epi8  
    int16x8_t vld1q_dup_s16 (const int16_t * __a);//_mm_set1_epi16  
    int32x4_t vld1q_dup_s32 (const int32_t * __a);//_mm_set1_epi32  
    float32x4_t vld1q_dup_f32 (const float32_t * __a);//_mm_set1_ps  
    uint8x16_t vld1q_dup_u8 (const uint8_t * __a);//_mm_set1_epi8  
    uint16x8_t vld1q_dup_u16 (const uint16_t * __a);//_mm_set1_epi16  
    uint32x4_t vld1q_dup_u32 (const uint32_t * __a);//_mm_set1_epi32  
    poly8x16_t vld1q_dup_p8 (const poly8_t * __a);//_mm_set1_epi8  
    poly16x8_t vld1q_dup_p16 (const poly16_t * __a);//_mm_set1_epi16  
    int64x2_t vld1q_dup_s64 (const int64_t * __a);  
    uint64x2_t vld1q_dup_u64 (const uint64_t * __a);  
    /*--4、Load 2-element structure from memory: vld2 -> loads 2 vectors from memory.  
    It performs a 2-way de-interleave from memory to the vectors.--*/  
    int8x8x2_t vld2_s8 (const int8_t * __a);  
    int16x4x2_t vld2_s16 (const int16_t * __a);  
    int32x2x2_t vld2_s32 (const int32_t * __a);  
    float32x2x2_t vld2_f32 (const float32_t * __a);  
    uint8x8x2_t vld2_u8 (const uint8_t * __a);  
    uint16x4x2_t vld2_u16 (const uint16_t * __a);  
    uint32x2x2_t vld2_u32 (const uint32_t * __a);  
    poly8x8x2_t vld2_p8 (const poly8_t * __a);  
    poly16x4x2_t vld2_p16 (const poly16_t * __a);  
    int64x1x2_t vld2_s64 (const int64_t * __a);  
    uint64x1x2_t vld2_u64 (const uint64_t * __a);  
    int8x16x2_t vld2q_s8 (const int8_t * __a);  
    int16x8x2_t vld2q_s16 (const int16_t * __a);  
    int32x4x2_t vld2q_s32 (const int32_t * __a);  
    float32x4x2_t vld2q_f32 (const float32_t * __a);  
    uint8x16x2_t vld2q_u8 (const uint8_t * __a);  
    uint16x8x2_t vld2q_u16 (const uint16_t * __a);  
    uint32x4x2_t vld2q_u32 (const uint32_t * __a);  
    poly8x16x2_t vld2q_p8 (const poly8_t * __a);  
    poly16x8x2_t vld2q_p16 (const poly16_t * __a);  
    /*--5、Load a single lane of 2-element structure from memory: vld2 ->  
    loads two elements in a double-vector structure from memory and returns this in  
    the result. The loaded values are from consecutive memory addresses.  
    Elements in the structure that are not loaded are returned in the result unaltered.  
    c is the index of the elements to load.--*/  
    int8x8x2_t vld2_lane_s8 (const int8_t * __a, int8x8x2_t __b, const int __c);  
    int16x4x2_t vld2_lane_s16 (const int16_t * __a, int16x4x2_t __b, const int __c);  
    int32x2x2_t vld2_lane_s32 (const int32_t * __a, int32x2x2_t __b, const int __c);  
    float32x2x2_t vld2_lane_f32 (const float32_t * __a, float32x2x2_t __b, const int __c);  
    uint8x8x2_t vld2_lane_u8 (const uint8_t * __a, uint8x8x2_t __b, const int __c);  
    uint16x4x2_t vld2_lane_u16 (const uint16_t * __a, uint16x4x2_t __b, const int __c);  
    uint32x2x2_t vld2_lane_u32 (const uint32_t * __a, uint32x2x2_t __b, const int __c);  
    poly8x8x2_t vld2_lane_p8 (const poly8_t * __a, poly8x8x2_t __b, const int __c);  
    poly16x4x2_t vld2_lane_p16 (const poly16_t * __a, poly16x4x2_t __b, const int __c);  
    int16x8x2_t vld2q_lane_s16 (const int16_t * __a, int16x8x2_t __b, const int __c);  
    int32x4x2_t vld2q_lane_s32 (const int32_t * __a, int32x4x2_t __b, const int __c);  
    float32x4x2_t vld2q_lane_f32 (const float32_t * __a, float32x4x2_t __b, const int __c);  
    uint16x8x2_t vld2q_lane_u16 (const uint16_t * __a, uint16x8x2_t __b, const int __c);  
    uint32x4x2_t vld2q_lane_u32 (const uint32_t * __a, uint32x4x2_t __b, const int __c);  
    poly16x8x2_t vld2q_lane_p16 (const poly16_t * __a, poly16x8x2_t __b, const int __c);  
    /*--6、Load all lanes of 2-element structure with same value from memory: vld2 ->  
    loads 2 elements from memory and returns a double-vector structure.  
    The first element is copied to all lanes of the first vector.  
    The second element is copied to all lanes of the second vector.--*/  
    int8x8x2_t vld2_dup_s8 (const int8_t * __a);  
    int16x4x2_t vld2_dup_s16 (const int16_t * __a);  
    int32x2x2_t vld2_dup_s32 (const int32_t * __a);  
    float32x2x2_t vld2_dup_f32 (const float32_t * __a);  
    uint8x8x2_t vld2_dup_u8 (const uint8_t * __a);  
    uint16x4x2_t vld2_dup_u16 (const uint16_t * __a);  
    uint32x2x2_t vld2_dup_u32 (const uint32_t * __a);  
    poly8x8x2_t vld2_dup_p8 (const poly8_t * __a);  
    poly16x4x2_t vld2_dup_p16 (const poly16_t * __a);  
    int64x1x2_t vld2_dup_s64 (const int64_t * __a);  
    uint64x1x2_t vld2_dup_u64 (const uint64_t * __a);  
    /*--7、Load 3-element structure from memory: vld3 ->  
    loads 3 vectors from memory.  
    It performs a 3-way de-interleave from memory to the vectors.--*/  
    int8x8x3_t vld3_s8 (const int8_t * __a);  
    int16x4x3_t vld3_s16 (const int16_t * __a);  
    int32x2x3_t vld3_s32 (const int32_t * __a);  
    float32x2x3_t vld3_f32 (const float32_t * __a);  
    uint8x8x3_t vld3_u8 (const uint8_t * __a);  
    uint16x4x3_t vld3_u16 (const uint16_t * __a);  
    uint32x2x3_t vld3_u32 (const uint32_t * __a);  
    poly8x8x3_t vld3_p8 (const poly8_t * __a);  
    poly16x4x3_t vld3_p16 (const poly16_t * __a);  
    int64x1x3_t vld3_s64 (const int64_t * __a);  
    uint64x1x3_t vld3_u64 (const uint64_t * __a);  
    int8x16x3_t vld3q_s8 (const int8_t * __a);  
    int16x8x3_t vld3q_s16 (const int16_t * __a);  
    int32x4x3_t vld3q_s32 (const int32_t * __a);  
    float32x4x3_t vld3q_f32 (const float32_t * __a);  
    uint8x16x3_t vld3q_u8 (const uint8_t * __a);  
    uint16x8x3_t vld3q_u16 (const uint16_t * __a);  
    uint32x4x3_t vld3q_u32 (const uint32_t * __a);  
    poly8x16x3_t vld3q_p8 (const poly8_t * __a);  
    poly16x8x3_t vld3q_p16 (const poly16_t * __a);  
    /*--8、Load a single lane of 3-element structure from memory: vld3 ->  
    loads three elements in a triple-vector structure from memory and returns this in the 
    result. The loaded values are from consecutive memory addresses.  
    Elements in the structure that are not loaded are returned in the result unaltered. 
    c is the index of the element to load.--*/  
    int8x8x3_t vld3_lane_s8 (const int8_t * __a, int8x8x3_t __b, const int __c);  
    int16x4x3_t vld3_lane_s16 (const int16_t * __a, int16x4x3_t __b, const int __c);  
    int32x2x3_t vld3_lane_s32 (const int32_t * __a, int32x2x3_t __b, const int __c);  
    float32x2x3_t vld3_lane_f32 (const float32_t * __a, float32x2x3_t __b, const int __c);  
    uint8x8x3_t vld3_lane_u8 (const uint8_t * __a, uint8x8x3_t __b, const int __c);  
    uint16x4x3_t vld3_lane_u16 (const uint16_t * __a, uint16x4x3_t __b, const int __c);  
    uint32x2x3_t vld3_lane_u32 (const uint32_t * __a, uint32x2x3_t __b, const int __c);  
    poly8x8x3_t vld3_lane_p8 (const poly8_t * __a, poly8x8x3_t __b, const int __c);  
    poly16x4x3_t vld3_lane_p16 (const poly16_t * __a, poly16x4x3_t __b, const int __c);  
    int16x8x3_t vld3q_lane_s16 (const int16_t * __a, int16x8x3_t __b, const int __c);  
    int32x4x3_t vld3q_lane_s32 (const int32_t * __a, int32x4x3_t __b, const int __c);  
    float32x4x3_t vld3q_lane_f32 (const float32_t * __a, float32x4x3_t __b, const int __c);  
    uint16x8x3_t vld3q_lane_u16 (const uint16_t * __a, uint16x8x3_t __b, const int __c);  
    uint32x4x3_t vld3q_lane_u32 (const uint32_t * __a, uint32x4x3_t __b, const int __c);  
    poly16x8x3_t vld3q_lane_p16 (const poly16_t * __a, poly16x8x3_t __b, const int __c);  
    /*--9、Load all lanes of 3-element structure with same value from memory: vld3 -> 
    loads 3 elements from memory and returns a triple-vector structure. The first element 
    is copied to all lanes of the first vector. And similarly the second and third elements  
    are copied to the second and third vectors respectively.--*/  
    int8x8x3_t vld3_dup_s8 (const int8_t * __a);  
    int16x4x3_t vld3_dup_s16 (const int16_t * __a);  
    int32x2x3_t vld3_dup_s32 (const int32_t * __a);  
    float32x2x3_t vld3_dup_f32 (const float32_t * __a);  
    uint8x8x3_t vld3_dup_u8 (const uint8_t * __a);  
    uint16x4x3_t vld3_dup_u16 (const uint16_t * __a);  
    uint32x2x3_t vld3_dup_u32 (const uint32_t * __a);  
    poly8x8x3_t vld3_dup_p8 (const poly8_t * __a);  
    poly16x4x3_t vld3_dup_p16 (const poly16_t * __a);  
    int64x1x3_t vld3_dup_s64 (const int64_t * __a);  
    uint64x1x3_t vld3_dup_u64 (const uint64_t * __a);  
    /*--10、Load 4-element structure from memory: vld4 ->  
    loads 4 vectors from memory.  
    It performs a 4-way de-interleave from memory to the vectors.--*/  
    int8x8x4_t vld4_s8 (const int8_t * __a);  
    int16x4x4_t vld4_s16 (const int16_t * __a);  
    int32x2x4_t vld4_s32 (const int32_t * __a);  
    float32x2x4_t vld4_f32 (const float32_t * __a);  
    uint8x8x4_t vld4_u8 (const uint8_t * __a);  
    uint16x4x4_t vld4_u16 (const uint16_t * __a);  
    uint32x2x4_t vld4_u32 (const uint32_t * __a);  
    poly8x8x4_t vld4_p8 (const poly8_t * __a);  
    poly16x4x4_t vld4_p16 (const poly16_t * __a);  
    int64x1x4_t vld4_s64 (const int64_t * __a);  
    uint64x1x4_t vld4_u64 (const uint64_t * __a);  
    int8x16x4_t vld4q_s8 (const int8_t * __a);  
    int16x8x4_t vld4q_s16 (const int16_t * __a);  
    int32x4x4_t vld4q_s32 (const int32_t * __a);  
    float32x4x4_t vld4q_f32 (const float32_t * __a);  
    uint8x16x4_t vld4q_u8 (const uint8_t * __a);  
    uint16x8x4_t vld4q_u16 (const uint16_t * __a);  
    uint32x4x4_t vld4q_u32 (const uint32_t * __a);  
    poly8x16x4_t vld4q_p8 (const poly8_t * __a);  
    poly16x8x4_t vld4q_p16 (const poly16_t * __a);  
    /*--11、Load a single lane of 4-element structure from memory: vld4 ->  
    loads four elements in a quad-vector structure from memory and returns this in the result.  
    The loaded values are from consecutive memory addresses. 
    Elements in the structure that are not loaded are returned in the result unaltered.  
    c is the index of the element to load.--*/  
    int8x8x4_t vld4_lane_s8 (const int8_t * __a, int8x8x4_t __b, const int __c);  
    int16x4x4_t vld4_lane_s16 (const int16_t * __a, int16x4x4_t __b, const int __c);  
    int32x2x4_t vld4_lane_s32 (const int32_t * __a, int32x2x4_t __b, const int __c);  
    float32x2x4_t vld4_lane_f32 (const float32_t * __a, float32x2x4_t __b, const int __c);  
    uint8x8x4_t vld4_lane_u8 (const uint8_t * __a, uint8x8x4_t __b, const int __c);  
    uint16x4x4_t vld4_lane_u16 (const uint16_t * __a, uint16x4x4_t __b, const int __c);  
    uint32x2x4_t vld4_lane_u32 (const uint32_t * __a, uint32x2x4_t __b, const int __c);  
    poly8x8x4_t vld4_lane_p8 (const poly8_t * __a, poly8x8x4_t __b, const int __c);  
    poly16x4x4_t vld4_lane_p16 (const poly16_t * __a, poly16x4x4_t __b, const int __c);  
    int16x8x4_t vld4q_lane_s16 (const int16_t * __a, int16x8x4_t __b, const int __c);  
    int32x4x4_t vld4q_lane_s32 (const int32_t * __a, int32x4x4_t __b, const int __c);  
    float32x4x4_t vld4q_lane_f32 (const float32_t * __a, float32x4x4_t __b, const int __c);  
    uint16x8x4_t vld4q_lane_u16 (const uint16_t * __a, uint16x8x4_t __b, const int __c);  
    uint32x4x4_t vld4q_lane_u32 (const uint32_t * __a, uint32x4x4_t __b, const int __c);  
    poly16x8x4_t vld4q_lane_p16 (const poly16_t * __a, poly16x8x4_t __b, const int __c);  
    /*--12、Load all lanes of 4-element structure with same value from memory: vld4 -> 
    loads 4 elements from memory and returns a quad-vector structure. The first element is  
    copied to all lanes of the first vector. And similarly the second, third, and fourth  
    elements are copied to the second, third, and fourth vectors respectively.--*/  
    int8x8x4_t vld4_dup_s8 (const int8_t * __a);  
    int16x4x4_t vld4_dup_s16 (const int16_t * __a);  
    int32x2x4_t vld4_dup_s32 (const int32_t * __a);  
    float32x2x4_t vld4_dup_f32 (const float32_t * __a);  
    uint8x8x4_t vld4_dup_u8 (const uint8_t * __a);  
    uint16x4x4_t vld4_dup_u16 (const uint16_t * __a);  
    uint32x2x4_t vld4_dup_u32 (const uint32_t * __a);  
    poly8x8x4_t vld4_dup_p8 (const poly8_t * __a);  
    poly16x4x4_t vld4_dup_p16 (const poly16_t * __a);  
    int64x1x4_t vld4_dup_s64 (const int64_t * __a);  
    uint64x1x4_t vld4_dup_u64 (const uint64_t * __a);  
    /*****************************************************Store*****************************/  
    /*--1、Store a single vector into memory: vst1 -> stores a vector into memory.--*/  
    void vst1_s8 (int8_t * __a, int8x8_t __b);  
    void vst1_s16 (int16_t * __a, int16x4_t __b);  
    void vst1_s32 (int32_t * __a, int32x2_t __b);  
    void vst1_s64 (int64_t * __a, int64x1_t __b);  
    void vst1_f32 (float32_t * __a, float32x2_t __b);  
    void vst1_u8 (uint8_t * __a, uint8x8_t __b);  
    void vst1_u16 (uint16_t * __a, uint16x4_t __b);  
    void vst1_u32 (uint32_t * __a, uint32x2_t __b);  
    void vst1_u64 (uint64_t * __a, uint64x1_t __b);  
    void vst1_p8 (poly8_t * __a, poly8x8_t __b);  
    void vst1_p16 (poly16_t * __a, poly16x4_t __b);  
    void vst1q_s8 (int8_t * __a, int8x16_t __b);  
    void vst1q_s16 (int16_t * __a, int16x8_t __b);  
    void vst1q_s32 (int32_t * __a, int32x4_t __b);  
    void vst1q_s64 (int64_t * __a, int64x2_t __b);  
    void vst1q_f32 (float32_t * __a, float32x4_t __b);  
    void vst1q_u8 (uint8_t * __a, uint8x16_t __b);  
    void vst1q_u16 (uint16_t * __a, uint16x8_t __b);  
    void vst1q_u32 (uint32_t * __a, uint32x4_t __b);  
    void vst1q_u64 (uint64_t * __a, uint64x2_t __b);  
    void vst1q_p8 (poly8_t * __a, poly8x16_t __b);  
    void vst1q_p16 (poly16_t * __a, poly16x8_t __b);  
    /*--2、Store a single lane into memory: vst1 ->  
    stores one element of the vector into memory.  
    c is the index in the vector to be stored.--*/  
    void vst1_lane_s8 (int8_t * __a, int8x8_t __b, const int __c);  
    void vst1_lane_s16 (int16_t * __a, int16x4_t __b, const int __c);  
    void vst1_lane_s32 (int32_t * __a, int32x2_t __b, const int __c);  
    void vst1_lane_f32 (float32_t * __a, float32x2_t __b, const int __c);  
    void vst1_lane_u8 (uint8_t * __a, uint8x8_t __b, const int __c);  
    void vst1_lane_u16 (uint16_t * __a, uint16x4_t __b, const int __c);  
    void vst1_lane_u32 (uint32_t * __a, uint32x2_t __b, const int __c);  
    void vst1_lane_p8 (poly8_t * __a, poly8x8_t __b, const int __c);  
    void vst1_lane_p16 (poly16_t * __a, poly16x4_t __b, const int __c);  
    void vst1_lane_s64 (int64_t * __a, int64x1_t __b, const int __c);  
    void vst1_lane_u64 (uint64_t * __a, uint64x1_t __b, const int __c);  
    void vst1q_lane_s8 (int8_t * __a, int8x16_t __b, const int __c);  
    void vst1q_lane_s16 (int16_t * __a, int16x8_t __b, const int __c);  
    void vst1q_lane_s32 (int32_t * __a, int32x4_t __b, const int __c);  
    void vst1q_lane_f32 (float32_t * __a, float32x4_t __b, const int __c);  
    void vst1q_lane_u8 (uint8_t * __a, uint8x16_t __b, const int __c);  
    void vst1q_lane_u16 (uint16_t * __a, uint16x8_t __b, const int __c);  
    void vst1q_lane_u32 (uint32_t * __a, uint32x4_t __b, const int __c);  
    void vst1q_lane_p8 (poly8_t * __a, poly8x16_t __b, const int __c);  
    void vst1q_lane_p16 (poly16_t * __a, poly16x8_t __b, const int __c);  
    void vst1q_lane_s64 (int64_t * __a, int64x2_t __b, const int __c);  
    void vst1q_lane_u64 (uint64_t * __a, uint64x2_t __b, const int __c);  
    /*--3、Store 2 vectors into memory: vst2 ->  
    stores 2 vectors into memory. It interleaves the 2 vectors into memory.--*/  
    void vst2_s8 (int8_t * __a, int8x8x2_t __b);  
    void vst2_s16 (int16_t * __a, int16x4x2_t __b);  
    void vst2_s32 (int32_t * __a, int32x2x2_t __b);  
    void vst2_f32 (float32_t * __a, float32x2x2_t __b);  
    void vst2_u8 (uint8_t * __a, uint8x8x2_t __b);  
    void vst2_u16 (uint16_t * __a, uint16x4x2_t __b);  
    void vst2_u32 (uint32_t * __a, uint32x2x2_t __b);  
    void vst2_p8 (poly8_t * __a, poly8x8x2_t __b);  
    void vst2_p16 (poly16_t * __a, poly16x4x2_t __b);  
    void vst2_s64 (int64_t * __a, int64x1x2_t __b);  
    void vst2_u64 (uint64_t * __a, uint64x1x2_t __b);  
    void vst2q_s8 (int8_t * __a, int8x16x2_t __b);  
    void vst2q_s16 (int16_t * __a, int16x8x2_t __b);  
    void vst2q_s32 (int32_t * __a, int32x4x2_t __b);  
    void vst2q_f32 (float32_t * __a, float32x4x2_t __b);  
    void vst2q_u8 (uint8_t * __a, uint8x16x2_t __b);  
    void vst2q_u16 (uint16_t * __a, uint16x8x2_t __b);  
    void vst2q_u32 (uint32_t * __a, uint32x4x2_t __b);  
    void vst2q_p8 (poly8_t * __a, poly8x16x2_t __b);  
    void vst2q_p16 (poly16_t * __a, poly16x8x2_t __b);  
    /*--4、Store a lane of two elements into memory: vst2 -> 
    stores a lane of two elements from a double-vector structure into memory. 
    The elements to be stored are from the same lane in the vectors and their index is c.--*/  
    void vst2_lane_s8 (int8_t * __a, int8x8x2_t __b, const int __c);  
    void vst2_lane_s16 (int16_t * __a, int16x4x2_t __b, const int __c);  
    void vst2_lane_s32 (int32_t * __a, int32x2x2_t __b, const int __c);  
    void vst2_lane_f32 (float32_t * __a, float32x2x2_t __b, const int __c);  
    void vst2_lane_u8 (uint8_t * __a, uint8x8x2_t __b, const int __c);  
    void vst2_lane_u16 (uint16_t * __a, uint16x4x2_t __b, const int __c);  
    void vst2_lane_u32 (uint32_t * __a, uint32x2x2_t __b, const int __c);  
    void vst2_lane_p8 (poly8_t * __a, poly8x8x2_t __b, const int __c);  
    void vst2_lane_p16 (poly16_t * __a, poly16x4x2_t __b, const int __c);  
    void vst2q_lane_s16 (int16_t * __a, int16x8x2_t __b, const int __c);  
    void vst2q_lane_s32 (int32_t * __a, int32x4x2_t __b, const int __c);  
    void vst2q_lane_f32 (float32_t * __a, float32x4x2_t __b, const int __c);  
    void vst2q_lane_u16 (uint16_t * __a, uint16x8x2_t __b, const int __c);  
    void vst2q_lane_u32 (uint32_t * __a, uint32x4x2_t __b, const int __c);  
    void vst2q_lane_p16 (poly16_t * __a, poly16x8x2_t __b, const int __c);  
    /*--5、Store 3 vectors into memory: vst3 ->  
    stores 3 vectors into memory. It interleaves the 3 vectors into memory.--*/  
    void vst3_s8 (int8_t * __a, int8x8x3_t __b);  
    void vst3_s16 (int16_t * __a, int16x4x3_t __b);  
    void vst3_s32 (int32_t * __a, int32x2x3_t __b);  
    void vst3_f32 (float32_t * __a, float32x2x3_t __b);  
    void vst3_u8 (uint8_t * __a, uint8x8x3_t __b);  
    void vst3_u16 (uint16_t * __a, uint16x4x3_t __b);  
    void vst3_u32 (uint32_t * __a, uint32x2x3_t __b);  
    void vst3_p8 (poly8_t * __a, poly8x8x3_t __b);  
    void vst3_p16 (poly16_t * __a, poly16x4x3_t __b);  
    void vst3_s64 (int64_t * __a, int64x1x3_t __b);  
    void vst3_u64 (uint64_t * __a, uint64x1x3_t __b);  
    void vst3q_s8 (int8_t * __a, int8x16x3_t __b);  
    void vst3q_s16 (int16_t * __a, int16x8x3_t __b);  
    void vst3q_s32 (int32_t * __a, int32x4x3_t __b);  
    void vst3q_f32 (float32_t * __a, float32x4x3_t __b);  
    void vst3q_u8 (uint8_t * __a, uint8x16x3_t __b);  
    void vst3q_u16 (uint16_t * __a, uint16x8x3_t __b);  
    void vst3q_u32 (uint32_t * __a, uint32x4x3_t __b);  
    void vst3q_p8 (poly8_t * __a, poly8x16x3_t __b);  
    void vst3q_p16 (poly16_t * __a, poly16x8x3_t __b);  
    /*--6、Store a lane of three elements into memory: vst3 -> 
    stores a lane of three elements from a triple-vector structure into memory.  
    The elements to be stored are from the same lane in the vectors and their index is c.--*/  
    void vst3_lane_s8 (int8_t * __a, int8x8x3_t __b, const int __c);  
    void vst3_lane_s16 (int16_t * __a, int16x4x3_t __b, const int __c);  
    void vst3_lane_s32 (int32_t * __a, int32x2x3_t __b, const int __c);  
    void vst3_lane_f32 (float32_t * __a, float32x2x3_t __b, const int __c);  
    void vst3_lane_u8 (uint8_t * __a, uint8x8x3_t __b, const int __c);  
    void vst3_lane_u16 (uint16_t * __a, uint16x4x3_t __b, const int __c);  
    void vst3_lane_u32 (uint32_t * __a, uint32x2x3_t __b, const int __c);  
    void vst3_lane_p8 (poly8_t * __a, poly8x8x3_t __b, const int __c);  
    void vst3_lane_p16 (poly16_t * __a, poly16x4x3_t __b, const int __c);  
    void vst3q_lane_s16 (int16_t * __a, int16x8x3_t __b, const int __c);  
    void vst3q_lane_s32 (int32_t * __a, int32x4x3_t __b, const int __c);  
    void vst3q_lane_f32 (float32_t * __a, float32x4x3_t __b, const int __c);  
    void vst3q_lane_u16 (uint16_t * __a, uint16x8x3_t __b, const int __c);  
    void vst3q_lane_u32 (uint32_t * __a, uint32x4x3_t __b, const int __c);  
    void vst3q_lane_p16 (poly16_t * __a, poly16x8x3_t __b, const int __c);  
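The lane-store semantics above can be illustrated with a plain-C scalar model (this is not the intrinsic itself; the helper name `model_vst3_lane_u16` is ours, written only to show what the instruction does): vst3_lane stores element c of each of the three vectors to three consecutive memory slots.

```c
#include <assert.h>
#include <stdint.h>

/* Scalar model of vst3_lane_u16: store lane c of each of the three
 * 4-lane vectors b0..b2 into three consecutive uint16_t slots at a.
 * Illustrative only; the real intrinsic takes a uint16x4x3_t. */
static void model_vst3_lane_u16(uint16_t *a,
                                const uint16_t b0[4],
                                const uint16_t b1[4],
                                const uint16_t b2[4],
                                int c)
{
    a[0] = b0[c];
    a[1] = b1[c];
    a[2] = b2[c];
}
```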
    /*--7、Store 4 vectors into memory: vst4 ->  
    stores 4 vectors into memory. It interleaves the 4 vectors into memory.--*/  
    void vst4_s8 (int8_t * __a, int8x8x4_t __b);  
    void vst4_s16 (int16_t * __a, int16x4x4_t __b);  
    void vst4_s32 (int32_t * __a, int32x2x4_t __b);  
    void vst4_f32 (float32_t * __a, float32x2x4_t __b);  
    void vst4_u8 (uint8_t * __a, uint8x8x4_t __b);  
    void vst4_u16 (uint16_t * __a, uint16x4x4_t __b);  
    void vst4_u32 (uint32_t * __a, uint32x2x4_t __b);  
    void vst4_p8 (poly8_t * __a, poly8x8x4_t __b);  
    void vst4_p16 (poly16_t * __a, poly16x4x4_t __b);  
    void vst4_s64 (int64_t * __a, int64x1x4_t __b);  
    void vst4_u64 (uint64_t * __a, uint64x1x4_t __b);  
    void vst4q_s8 (int8_t * __a, int8x16x4_t __b);  
    void vst4q_s16 (int16_t * __a, int16x8x4_t __b);  
    void vst4q_s32 (int32_t * __a, int32x4x4_t __b);  
    void vst4q_f32 (float32_t * __a, float32x4x4_t __b);  
    void vst4q_u8 (uint8_t * __a, uint8x16x4_t __b);  
    void vst4q_u16 (uint16_t * __a, uint16x8x4_t __b);  
    void vst4q_u32 (uint32_t * __a, uint32x4x4_t __b);  
    void vst4q_p8 (poly8_t * __a, poly8x16x4_t __b);  
    void vst4q_p16 (poly16_t * __a, poly16x8x4_t __b);  
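The interleaving performed by vst4 can be sketched in plain C (again a scalar model, not the intrinsic; `model_vst4_u8` is our own name): element i of each of the four vectors is written adjacently, producing the pattern b0[0], b1[0], b2[0], b3[0], b0[1], ...

```c
#include <assert.h>
#include <stdint.h>

/* Scalar model of vst4_u8 for 8-lane vectors: interleave the four
 * vectors b[0]..b[3] into 32 bytes of memory.
 * Illustrative only; the real intrinsic takes a uint8x8x4_t. */
static void model_vst4_u8(uint8_t *a, uint8_t b[4][8])
{
    for (int i = 0; i < 8; ++i)
        for (int j = 0; j < 4; ++j)
            a[4 * i + j] = b[j][i];
}
```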
    /*--8、Store a lane of four elements into memory: vst4 -> 
    stores a lane of four elements from a quad-vector structure into memory. 
    The elements to be stored are from the same lane in the vectors and their index is c.--*/  
    void vst4_lane_s8 (int8_t * __a, int8x8x4_t __b, const int __c);  
    void vst4_lane_s16 (int16_t * __a, int16x4x4_t __b, const int __c);  
    void vst4_lane_s32 (int32_t * __a, int32x2x4_t __b, const int __c);  
    void vst4_lane_f32 (float32_t * __a, float32x2x4_t __b, const int __c);  
    void vst4_lane_u8 (uint8_t * __a, uint8x8x4_t __b, const int __c);  
    void vst4_lane_u16 (uint16_t * __a, uint16x4x4_t __b, const int __c);  
    void vst4_lane_u32 (uint32_t * __a, uint32x2x4_t __b, const int __c);  
    void vst4_lane_p8 (poly8_t * __a, poly8x8x4_t __b, const int __c);  
    void vst4_lane_p16 (poly16_t * __a, poly16x4x4_t __b, const int __c);  
    void vst4q_lane_s16 (int16_t * __a, int16x8x4_t __b, const int __c);  
    void vst4q_lane_s32 (int32_t * __a, int32x4x4_t __b, const int __c);  
    void vst4q_lane_f32 (float32_t * __a, float32x4x4_t __b, const int __c);  
    void vst4q_lane_u16 (uint16_t * __a, uint16x8x4_t __b, const int __c);  
    void vst4q_lane_u32 (uint32_t * __a, uint32x4x4_t __b, const int __c);  
    void vst4q_lane_p16 (poly16_t * __a, poly16x8x4_t __b, const int __c);  
    /*********************************Reinterpret casts(type conversion)********************/  
    /*--convert between types: vreinterpret -> treats a vector as having a different  
    datatype, without changing its value.--*/  
    poly8x8_t vreinterpret_p8_s8 (int8x8_t __a);  
    poly8x8_t vreinterpret_p8_s16 (int16x4_t __a);  
    poly8x8_t vreinterpret_p8_s32 (int32x2_t __a);  
    poly8x8_t vreinterpret_p8_s64 (int64x1_t __a);  
    poly8x8_t vreinterpret_p8_f32 (float32x2_t __a);  
    poly8x8_t vreinterpret_p8_u8 (uint8x8_t __a);  
    poly8x8_t vreinterpret_p8_u16 (uint16x4_t __a);  
    poly8x8_t vreinterpret_p8_u32 (uint32x2_t __a);  
    poly8x8_t vreinterpret_p8_u64 (uint64x1_t __a);  
    poly8x8_t vreinterpret_p8_p16 (poly16x4_t __a);  
    poly8x16_t vreinterpretq_p8_s8 (int8x16_t __a);  
    poly8x16_t vreinterpretq_p8_s16 (int16x8_t __a);  
    poly8x16_t vreinterpretq_p8_s32 (int32x4_t __a);  
    poly8x16_t vreinterpretq_p8_s64 (int64x2_t __a);  
    poly8x16_t vreinterpretq_p8_f32 (float32x4_t __a);  
    poly8x16_t vreinterpretq_p8_u8 (uint8x16_t __a);  
    poly8x16_t vreinterpretq_p8_u16 (uint16x8_t __a);  
    poly8x16_t vreinterpretq_p8_u32 (uint32x4_t __a);  
    poly8x16_t vreinterpretq_p8_u64 (uint64x2_t __a);  
    poly8x16_t vreinterpretq_p8_p16 (poly16x8_t __a);  
    poly16x4_t vreinterpret_p16_s8 (int8x8_t __a);  
    poly16x4_t vreinterpret_p16_s16 (int16x4_t __a);  
    poly16x4_t vreinterpret_p16_s32 (int32x2_t __a);  
    poly16x4_t vreinterpret_p16_s64 (int64x1_t __a);  
    poly16x4_t vreinterpret_p16_f32 (float32x2_t __a);  
    poly16x4_t vreinterpret_p16_u8 (uint8x8_t __a);  
    poly16x4_t vreinterpret_p16_u16 (uint16x4_t __a);  
    poly16x4_t vreinterpret_p16_u32 (uint32x2_t __a);  
    poly16x4_t vreinterpret_p16_u64 (uint64x1_t __a);  
    poly16x4_t vreinterpret_p16_p8 (poly8x8_t __a);  
    poly16x8_t vreinterpretq_p16_s8 (int8x16_t __a);  
    poly16x8_t vreinterpretq_p16_s16 (int16x8_t __a);  
    poly16x8_t vreinterpretq_p16_s32 (int32x4_t __a);  
    poly16x8_t vreinterpretq_p16_s64 (int64x2_t __a);  
    poly16x8_t vreinterpretq_p16_f32 (float32x4_t __a);  
    poly16x8_t vreinterpretq_p16_u8 (uint8x16_t __a);  
    poly16x8_t vreinterpretq_p16_u16 (uint16x8_t __a);  
    poly16x8_t vreinterpretq_p16_u32 (uint32x4_t __a);  
    poly16x8_t vreinterpretq_p16_u64 (uint64x2_t __a);  
    poly16x8_t vreinterpretq_p16_p8 (poly8x16_t __a);  
    float32x2_t vreinterpret_f32_s8 (int8x8_t __a);  
    float32x2_t vreinterpret_f32_s16 (int16x4_t __a);  
    float32x2_t vreinterpret_f32_s32 (int32x2_t __a);  
    float32x2_t vreinterpret_f32_s64 (int64x1_t __a);  
    float32x2_t vreinterpret_f32_u8 (uint8x8_t __a);  
    float32x2_t vreinterpret_f32_u16 (uint16x4_t __a);  
    float32x2_t vreinterpret_f32_u32 (uint32x2_t __a);  
    float32x2_t vreinterpret_f32_u64 (uint64x1_t __a);  
    float32x2_t vreinterpret_f32_p8 (poly8x8_t __a);  
    float32x2_t vreinterpret_f32_p16 (poly16x4_t __a);  
    float32x4_t vreinterpretq_f32_s8 (int8x16_t __a);  
    float32x4_t vreinterpretq_f32_s16 (int16x8_t __a);  
    float32x4_t vreinterpretq_f32_s32 (int32x4_t __a);  
    float32x4_t vreinterpretq_f32_s64 (int64x2_t __a);  
    float32x4_t vreinterpretq_f32_u8 (uint8x16_t __a);  
    float32x4_t vreinterpretq_f32_u16 (uint16x8_t __a);  
    float32x4_t vreinterpretq_f32_u32 (uint32x4_t __a);  
    float32x4_t vreinterpretq_f32_u64 (uint64x2_t __a);  
    float32x4_t vreinterpretq_f32_p8 (poly8x16_t __a);  
    float32x4_t vreinterpretq_f32_p16 (poly16x8_t __a);  
    int64x1_t vreinterpret_s64_s8 (int8x8_t __a);  
    int64x1_t vreinterpret_s64_s16 (int16x4_t __a);  
    int64x1_t vreinterpret_s64_s32 (int32x2_t __a);  
    int64x1_t vreinterpret_s64_f32 (float32x2_t __a);  
    int64x1_t vreinterpret_s64_u8 (uint8x8_t __a);  
    int64x1_t vreinterpret_s64_u16 (uint16x4_t __a);  
    int64x1_t vreinterpret_s64_u32 (uint32x2_t __a);  
    int64x1_t vreinterpret_s64_u64 (uint64x1_t __a);  
    int64x1_t vreinterpret_s64_p8 (poly8x8_t __a);  
    int64x1_t vreinterpret_s64_p16 (poly16x4_t __a);  
    int64x2_t vreinterpretq_s64_s8 (int8x16_t __a);  
    int64x2_t vreinterpretq_s64_s16 (int16x8_t __a);  
    int64x2_t vreinterpretq_s64_s32 (int32x4_t __a);  
    int64x2_t vreinterpretq_s64_f32 (float32x4_t __a);  
    int64x2_t vreinterpretq_s64_u8 (uint8x16_t __a);  
    int64x2_t vreinterpretq_s64_u16 (uint16x8_t __a);  
    int64x2_t vreinterpretq_s64_u32 (uint32x4_t __a);  
    int64x2_t vreinterpretq_s64_u64 (uint64x2_t __a);  
    int64x2_t vreinterpretq_s64_p8 (poly8x16_t __a);  
    int64x2_t vreinterpretq_s64_p16 (poly16x8_t __a);  
    uint64x1_t vreinterpret_u64_s8 (int8x8_t __a);  
    uint64x1_t vreinterpret_u64_s16 (int16x4_t __a);  
    uint64x1_t vreinterpret_u64_s32 (int32x2_t __a);  
    uint64x1_t vreinterpret_u64_s64 (int64x1_t __a);  
    uint64x1_t vreinterpret_u64_f32 (float32x2_t __a);  
    uint64x1_t vreinterpret_u64_u8 (uint8x8_t __a);  
    uint64x1_t vreinterpret_u64_u16 (uint16x4_t __a);  
    uint64x1_t vreinterpret_u64_u32 (uint32x2_t __a);  
    uint64x1_t vreinterpret_u64_p8 (poly8x8_t __a);  
    uint64x1_t vreinterpret_u64_p16 (poly16x4_t __a);  
    uint64x2_t vreinterpretq_u64_s8 (int8x16_t __a);  
    uint64x2_t vreinterpretq_u64_s16 (int16x8_t __a);  
    uint64x2_t vreinterpretq_u64_s32 (int32x4_t __a);  
    uint64x2_t vreinterpretq_u64_s64 (int64x2_t __a);  
    uint64x2_t vreinterpretq_u64_f32 (float32x4_t __a);  
    uint64x2_t vreinterpretq_u64_u8 (uint8x16_t __a);  
    uint64x2_t vreinterpretq_u64_u16 (uint16x8_t __a);  
    uint64x2_t vreinterpretq_u64_u32 (uint32x4_t __a);  
    uint64x2_t vreinterpretq_u64_p8 (poly8x16_t __a);  
    uint64x2_t vreinterpretq_u64_p16 (poly16x8_t __a);  
    int8x8_t vreinterpret_s8_s16 (int16x4_t __a);  
    int8x8_t vreinterpret_s8_s32 (int32x2_t __a);  
    int8x8_t vreinterpret_s8_s64 (int64x1_t __a);  
    int8x8_t vreinterpret_s8_f32 (float32x2_t __a);  
    int8x8_t vreinterpret_s8_u8 (uint8x8_t __a);  
    int8x8_t vreinterpret_s8_u16 (uint16x4_t __a);  
    int8x8_t vreinterpret_s8_u32 (uint32x2_t __a);  
    int8x8_t vreinterpret_s8_u64 (uint64x1_t __a);  
    int8x8_t vreinterpret_s8_p8 (poly8x8_t __a);  
    int8x8_t vreinterpret_s8_p16 (poly16x4_t __a);  
    int8x16_t vreinterpretq_s8_s16 (int16x8_t __a);  
    int8x16_t vreinterpretq_s8_s32 (int32x4_t __a);  
    int8x16_t vreinterpretq_s8_s64 (int64x2_t __a);  
    int8x16_t vreinterpretq_s8_f32 (float32x4_t __a);  
    int8x16_t vreinterpretq_s8_u8 (uint8x16_t __a);  
    int8x16_t vreinterpretq_s8_u16 (uint16x8_t __a);  
    int8x16_t vreinterpretq_s8_u32 (uint32x4_t __a);  
    int8x16_t vreinterpretq_s8_u64 (uint64x2_t __a);  
    int8x16_t vreinterpretq_s8_p8 (poly8x16_t __a);  
    int8x16_t vreinterpretq_s8_p16 (poly16x8_t __a);  
    int16x4_t vreinterpret_s16_s8 (int8x8_t __a);  
    int16x4_t vreinterpret_s16_s32 (int32x2_t __a);  
    int16x4_t vreinterpret_s16_s64 (int64x1_t __a);  
    int16x4_t vreinterpret_s16_f32 (float32x2_t __a);  
    int16x4_t vreinterpret_s16_u8 (uint8x8_t __a);  
    int16x4_t vreinterpret_s16_u16 (uint16x4_t __a);  
    int16x4_t vreinterpret_s16_u32 (uint32x2_t __a);  
    int16x4_t vreinterpret_s16_u64 (uint64x1_t __a);  
    int16x4_t vreinterpret_s16_p8 (poly8x8_t __a);  
    int16x4_t vreinterpret_s16_p16 (poly16x4_t __a);  
    int16x8_t vreinterpretq_s16_s8 (int8x16_t __a);  
    int16x8_t vreinterpretq_s16_s32 (int32x4_t __a);  
    int16x8_t vreinterpretq_s16_s64 (int64x2_t __a);  
    int16x8_t vreinterpretq_s16_f32 (float32x4_t __a);  
    int16x8_t vreinterpretq_s16_u8 (uint8x16_t __a);  
    int16x8_t vreinterpretq_s16_u16 (uint16x8_t __a);  
    int16x8_t vreinterpretq_s16_u32 (uint32x4_t __a);  
    int16x8_t vreinterpretq_s16_u64 (uint64x2_t __a);  
    int16x8_t vreinterpretq_s16_p8 (poly8x16_t __a);  
    int16x8_t vreinterpretq_s16_p16 (poly16x8_t __a);  
    int32x2_t vreinterpret_s32_s8 (int8x8_t __a);  
    int32x2_t vreinterpret_s32_s16 (int16x4_t __a);  
    int32x2_t vreinterpret_s32_s64 (int64x1_t __a);  
    int32x2_t vreinterpret_s32_f32 (float32x2_t __a);  
    int32x2_t vreinterpret_s32_u8 (uint8x8_t __a);  
    int32x2_t vreinterpret_s32_u16 (uint16x4_t __a);  
    int32x2_t vreinterpret_s32_u32 (uint32x2_t __a);  
    int32x2_t vreinterpret_s32_u64 (uint64x1_t __a);  
    int32x2_t vreinterpret_s32_p8 (poly8x8_t __a);  
    int32x2_t vreinterpret_s32_p16 (poly16x4_t __a);  
    int32x4_t vreinterpretq_s32_s8 (int8x16_t __a);  
    int32x4_t vreinterpretq_s32_s16 (int16x8_t __a);  
    int32x4_t vreinterpretq_s32_s64 (int64x2_t __a);  
    int32x4_t vreinterpretq_s32_f32 (float32x4_t __a);  
    int32x4_t vreinterpretq_s32_u8 (uint8x16_t __a);  
    int32x4_t vreinterpretq_s32_u16 (uint16x8_t __a);  
    int32x4_t vreinterpretq_s32_u32 (uint32x4_t __a);  
    int32x4_t vreinterpretq_s32_u64 (uint64x2_t __a);  
    int32x4_t vreinterpretq_s32_p8 (poly8x16_t __a);  
    int32x4_t vreinterpretq_s32_p16 (poly16x8_t __a);  
    uint8x8_t vreinterpret_u8_s8 (int8x8_t __a);  
    uint8x8_t vreinterpret_u8_s16 (int16x4_t __a);  
    uint8x8_t vreinterpret_u8_s32 (int32x2_t __a);  
    uint8x8_t vreinterpret_u8_s64 (int64x1_t __a);  
    uint8x8_t vreinterpret_u8_f32 (float32x2_t __a);  
    uint8x8_t vreinterpret_u8_u16 (uint16x4_t __a);  
    uint8x8_t vreinterpret_u8_u32 (uint32x2_t __a);  
    uint8x8_t vreinterpret_u8_u64 (uint64x1_t __a);  
    uint8x8_t vreinterpret_u8_p8 (poly8x8_t __a);  
    uint8x8_t vreinterpret_u8_p16 (poly16x4_t __a);  
    uint8x16_t vreinterpretq_u8_s8 (int8x16_t __a);  
    uint8x16_t vreinterpretq_u8_s16 (int16x8_t __a);  
    uint8x16_t vreinterpretq_u8_s32 (int32x4_t __a);  
    uint8x16_t vreinterpretq_u8_s64 (int64x2_t __a);  
    uint8x16_t vreinterpretq_u8_f32 (float32x4_t __a);  
    uint8x16_t vreinterpretq_u8_u16 (uint16x8_t __a);  
    uint8x16_t vreinterpretq_u8_u32 (uint32x4_t __a);  
    uint8x16_t vreinterpretq_u8_u64 (uint64x2_t __a);  
    uint8x16_t vreinterpretq_u8_p8 (poly8x16_t __a);  
    uint8x16_t vreinterpretq_u8_p16 (poly16x8_t __a);  
    uint16x4_t vreinterpret_u16_s8 (int8x8_t __a);  
    uint16x4_t vreinterpret_u16_s16 (int16x4_t __a);  
    uint16x4_t vreinterpret_u16_s32 (int32x2_t __a);  
    uint16x4_t vreinterpret_u16_s64 (int64x1_t __a);  
    uint16x4_t vreinterpret_u16_f32 (float32x2_t __a);  
    uint16x4_t vreinterpret_u16_u8 (uint8x8_t __a);  
    uint16x4_t vreinterpret_u16_u32 (uint32x2_t __a);  
    uint16x4_t vreinterpret_u16_u64 (uint64x1_t __a);  
    uint16x4_t vreinterpret_u16_p8 (poly8x8_t __a);  
    uint16x4_t vreinterpret_u16_p16 (poly16x4_t __a);  
    uint16x8_t vreinterpretq_u16_s8 (int8x16_t __a);  
    uint16x8_t vreinterpretq_u16_s16 (int16x8_t __a);  
    uint16x8_t vreinterpretq_u16_s32 (int32x4_t __a);  
    uint16x8_t vreinterpretq_u16_s64 (int64x2_t __a);  
    uint16x8_t vreinterpretq_u16_f32 (float32x4_t __a);  
    uint16x8_t vreinterpretq_u16_u8 (uint8x16_t __a);  
    uint16x8_t vreinterpretq_u16_u32 (uint32x4_t __a);  
    uint16x8_t vreinterpretq_u16_u64 (uint64x2_t __a);  
    uint16x8_t vreinterpretq_u16_p8 (poly8x16_t __a);  
    uint16x8_t vreinterpretq_u16_p16 (poly16x8_t __a);  
    uint32x2_t vreinterpret_u32_s8 (int8x8_t __a);  
    uint32x2_t vreinterpret_u32_s16 (int16x4_t __a);  
    uint32x2_t vreinterpret_u32_s32 (int32x2_t __a);  
    uint32x2_t vreinterpret_u32_s64 (int64x1_t __a);  
    uint32x2_t vreinterpret_u32_f32 (float32x2_t __a);  
    uint32x2_t vreinterpret_u32_u8 (uint8x8_t __a);  
    uint32x2_t vreinterpret_u32_u16 (uint16x4_t __a);  
    uint32x2_t vreinterpret_u32_u64 (uint64x1_t __a);  
    uint32x2_t vreinterpret_u32_p8 (poly8x8_t __a);  
    uint32x2_t vreinterpret_u32_p16 (poly16x4_t __a);  
    uint32x4_t vreinterpretq_u32_s8 (int8x16_t __a);  
    uint32x4_t vreinterpretq_u32_s16 (int16x8_t __a);  
    uint32x4_t vreinterpretq_u32_s32 (int32x4_t __a);  
    uint32x4_t vreinterpretq_u32_s64 (int64x2_t __a);  
    uint32x4_t vreinterpretq_u32_f32 (float32x4_t __a);  
    uint32x4_t vreinterpretq_u32_u8 (uint8x16_t __a);  
    uint32x4_t vreinterpretq_u32_u16 (uint16x8_t __a);  
    uint32x4_t vreinterpretq_u32_u64 (uint64x2_t __a);  
    uint32x4_t vreinterpretq_u32_p8 (poly8x16_t __a);  
    uint32x4_t vreinterpretq_u32_p16 (poly16x8_t __a);  
    







    https://zhuanlan.zhihu.com/p/27334213


    ARM NEON Programming Series 2 - Basic Instruction Set

    Preface

    This series of posts introduces NEON instruction optimization on ARM CPUs.

    • Posts repository: github
    • Companion code repository: github

    The NEON instruction set

    Mainstream compilers that target ARM CPUs all support NEON instructions. NEON can be used by embedding NEON assembly directly in code, but the more common approach is to write NEON code through NEON intrinsics, which look like ordinary C functions (as in the NEON hello-world example). NEON intrinsics are a compiler-supported collection of built-in types and functions that cover essentially all NEON instructions; they are normally declared in the arm_neon.h header.
    This article uses the armv7 arm_neon.h from android-ndk-r11c as its reference when describing the NEON instruction types.

    Registers

    The ARMv7 architecture provides:

    • 16 general-purpose registers (32-bit), R0-R15
    • 16 NEON registers (128-bit), Q0-Q15 (also viewable as 32 64-bit registers, D0-D31)
    • 32 VFP registers (32-bit), S0-S31

      The difference between NEON and VFP is that VFP is hardware for accelerating scalar floating-point computation, with no data-parallel capability; VFP additionally supports double-precision (double) arithmetic, whereas NEON offers only single-precision floating-point. For more, see stackoverflow: neon vs vfp.

    Basic data types

    • 64-bit data types, mapped onto registers D0-D31
      The corresponding C/C++ types (from the stdint.h or cstdint header) are noted in the comments.

      //typedef int8_t[8] int8x8_t;
      typedef __builtin_neon_qi int8x8_t  __attribute__ ((__vector_size__ (8)));
      //typedef int16_t[4] int16x4_t;
      typedef __builtin_neon_hi int16x4_t __attribute__ ((__vector_size__ (8)));
      //typedef int32_t[2] int32x2_t;
      typedef __builtin_neon_si int32x2_t __attribute__ ((__vector_size__ (8)));
      //typedef int64_t[1] int64x1_t;
      typedef __builtin_neon_di int64x1_t;
      //typedef float16_t[4] float16x4_t;
      //(note: half precision, supported only on some newer CPUs; the C/C++ standard has no such basic type)
      typedef __builtin_neon_hf float16x4_t   __attribute__ ((__vector_size__ (8)));
      //typedef float32_t[2] float32x2_t;
      typedef __builtin_neon_sf float32x2_t   __attribute__ ((__vector_size__ (8)));
      //the poly8 and poly16 types are rarely used in common algorithms
      //for details see:
      //http://stackoverflow.com/questions/22224282/arm-neon-and-poly8-t-and-poly16-t
      typedef __builtin_neon_poly8 poly8x8_t  __attribute__ ((__vector_size__ (8)));
      typedef __builtin_neon_poly16 poly16x4_t    __attribute__ ((__vector_size__ (8)));
      #ifdef __ARM_FEATURE_CRYPTO
      typedef __builtin_neon_poly64 poly64x1_t;
      #endif
      //typedef uint8_t[8] uint8x8_t;
      typedef __builtin_neon_uqi uint8x8_t    __attribute__ ((__vector_size__ (8)));
      //typedef uint16_t[4] uint16x4_t;
      typedef __builtin_neon_uhi uint16x4_t   __attribute__ ((__vector_size__ (8)));
      //typedef uint32_t[2] uint32x2_t;
      typedef __builtin_neon_usi uint32x2_t   __attribute__ ((__vector_size__ (8)));
      //typedef uint64_t[1] uint64x1_t;
      typedef __builtin_neon_udi uint64x1_t;
      
    • 128-bit data types, mapped onto registers Q0-Q15
      The corresponding C/C++ types (from the stdint.h or cstdint header) are noted in the comments.

      //typedef int8_t[16] int8x16_t;
      typedef __builtin_neon_qi int8x16_t __attribute__ ((__vector_size__ (16)));
      //typedef int16_t[8] int16x8_t;
      typedef __builtin_neon_hi int16x8_t __attribute__ ((__vector_size__ (16)));
      //typedef int32_t[4] int32x4_t;
      typedef __builtin_neon_si int32x4_t __attribute__ ((__vector_size__ (16)));
      //typedef int64_t[2] int64x2_t;
      typedef __builtin_neon_di int64x2_t __attribute__ ((__vector_size__ (16)));
      //typedef float32_t[4] float32x4_t;
      typedef __builtin_neon_sf float32x4_t   __attribute__ ((__vector_size__ (16)));
      //the poly8 and poly16 types are rarely used in common algorithms
      //for details see:
      //http://stackoverflow.com/questions/22224282/arm-neon-and-poly8-t-and-poly16-t
      typedef __builtin_neon_poly8 poly8x16_t __attribute__ ((__vector_size__ (16)));
      typedef __builtin_neon_poly16 poly16x8_t    __attribute__ ((__vector_size__ (16)));
      #ifdef __ARM_FEATURE_CRYPTO
      typedef __builtin_neon_poly64 poly64x2_t    __attribute__ ((__vector_size__ (16)));
      #endif
      //typedef uint8_t[16] uint8x16_t;
      typedef __builtin_neon_uqi uint8x16_t   __attribute__ ((__vector_size__ (16)));
      //typedef uint16_t[8] uint16x8_t;
      typedef __builtin_neon_uhi uint16x8_t   __attribute__ ((__vector_size__ (16)));
      //typedef uint32_t[4] uint32x4_t;
      typedef __builtin_neon_usi uint32x4_t   __attribute__ ((__vector_size__ (16)));
      //typedef uint64_t[2] uint64x2_t;
      typedef __builtin_neon_udi uint64x2_t   __attribute__ ((__vector_size__ (16)));
      typedef float float32_t;
      typedef __builtin_neon_poly8 poly8_t;
      typedef __builtin_neon_poly16 poly16_t;
      #ifdef __ARM_FEATURE_CRYPTO
      typedef __builtin_neon_poly64 poly64_t;
      typedef __builtin_neon_poly128 poly128_t;
      #endif
      

    Structured data types

    The following types are structures composed of the basic types above, and are typically mapped onto multiple registers.

    typedef struct int8x8x2_t
    {
      int8x8_t val[2];
    } int8x8x2_t;
    ...
    //(remaining combinations omitted)
    ...
    #ifdef __ARM_FEATURE_CRYPTO
    typedef struct poly64x2x4_t
    {
      poly64x2_t val[4];
    } poly64x2x4_t;
    #endif
    

    Basic instruction set

    By operand shape, NEON instructions can be divided into normal, wide, narrow, saturating and long instructions.

    • Normal instructions: produce a result vector of the same size, and usually the same type, as the operand vectors.
    • Long instructions: operate on doubleword vector operands and produce a quadword vector result. The result elements are generally twice the width of the operand elements and of the same type. Marked with L, e.g. VMOVL.
    • Wide instructions: operate on one doubleword vector operand and one quadword vector operand, producing a quadword vector result. Marked with W, e.g. VADDW.
    • Narrow instructions: operate on quadword vector operands and produce a doubleword vector result; the result elements are generally half the width of the operand elements. Marked with N, e.g. VMOVN.
    • Saturating instructions: results that exceed the range of the data type are automatically clamped to that range. Marked with Q, e.g. VQSHRUN.

    By function, NEON instructions can be grouped into loads, stores, add/subtract/multiply/divide arithmetic, logical AND/OR/XOR operations, comparisons, and so on; see appendices C and D of reference [1] for details.
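These width classes can be illustrated with one-lane scalar C equivalents (the `model_*` helper names are ours, not real intrinsics; each models a single lane of vaddl_u8, vaddhn_u16 and vqadd_u8 respectively):

```c
#include <assert.h>
#include <stdint.h>

/* long: vaddl_u8 widens before adding, so 200 + 100 cannot overflow */
static uint16_t model_vaddl_u8(uint8_t m, uint8_t n)
{
    return (uint16_t)((uint16_t)m + (uint16_t)n);
}

/* narrow: vaddhn_u16 keeps only the high half of each 16-bit sum */
static uint8_t model_vaddhn_u16(uint16_t m, uint16_t n)
{
    return (uint8_t)(((uint32_t)m + n) >> 8);
}

/* saturating: vqadd_u8 clamps instead of wrapping */
static uint8_t model_vqadd_u8(uint8_t m, uint8_t n)
{
    unsigned s = (unsigned)m + n;
    return (uint8_t)(s > 255u ? 255u : s);
}
```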

    Commonly used instructions include:

    • Initializing registers
      vdup/vmov set every lane of the register to the scalar value N; vcreate instead builds a vector from a 64-bit bit pattern

      Result_t vcreate_type(Scalar_t N)
      Result_t vdup_type(Scalar_t N)
      Result_t vmov_type(Scalar_t N)
      

      Lanes are explained below.

    • Loading memory into registers
      Load data into NEON registers with interleave stride x

      Result_t vld[x]_type(Scalar_t* N)
      Result_t vld[x]q_type(Scalar_t* N)
      

      Load data with stride x into the selected lane of the NEON registers, leaving the other lanes unchanged

      Result_t vld[x]_lane_type(Scalar_t* N,Vector_t M,int n)
      Result_t vld[x]q_lane_type(Scalar_t* N,Vector_t M,int n)
      

      Load x items from N and duplicate each into all lanes of registers 0-(x-1)

      Result_t vld[x]_dup_type(Scalar_t* N)
      Result_t vld[x]q_dup_type(Scalar_t* N)
      
      • lane: a float32x4_t NEON register, for example, has 4 lanes, each holding one float32 value. So `float32x4_t dst = vld1q_lane_f32(float32_t* ptr, float32x4_t src, int n=2)` first copies src into dst, then loads the float at the memory address ptr into lane index 2 of dst (lane indices start at 0). dst ends up as {src[0], src[1], *ptr, src[3]}.
      • stride (interleaved access): a NEON-specific feature. For example, `float32x4x3_t v = vld3q_f32(float32_t* ptr)` uses stride 3: 12 float32 values are read, de-interleaved, into 3 NEON registers whose values are {ptr[0],ptr[3],ptr[6],ptr[9]}, {ptr[1],ptr[4],ptr[7],ptr[10]} and {ptr[2],ptr[5],ptr[8],ptr[11]}.
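The interleaved load above can be modeled in plain C (scalar sketch only; the helper name `model_vld3q_f32` is ours, not the intrinsic):

```c
#include <assert.h>

/* Scalar model of vld3q_f32: de-interleave 12 floats into three
 * 4-lane "registers" r0..r2, so r0 = {p[0],p[3],p[6],p[9]}, etc. */
static void model_vld3q_f32(const float *p, float r0[4], float r1[4], float r2[4])
{
    for (int i = 0; i < 4; ++i) {
        r0[i] = p[3 * i + 0];
        r1[i] = p[3 * i + 1];
        r2[i] = p[3 * i + 2];
    }
}
```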
    • Storing registers to memory
      Store NEON register data to memory with interleave stride x

      void vst[x]_type(Scalar_t* N, Vector_t M)
      void vst[x]q_type(Scalar_t* N, Vector_t M)
      

      Store the selected lane of the NEON registers to memory with stride x

      void vst[x]_lane_type(Scalar_t* N, Vector_t M, int n)
      void vst[x]q_lane_type(Scalar_t* N, Vector_t M, int n)
      
    • Reading/modifying register data
      Read lane n of the register

      Result_t vget_lane_type(Vector_t M, int n)
      

      Read the low/high half of a register into a new register of half the width (narrowing).

      Result_t vget_low_type(Vector_t M)
      Result_t vget_high_type(Vector_t M)
      

      Return a copy of M with lane n set to N

      Result_t vset_lane_type(Scalar_t N, Vector_t M, int n)
      
    • Rearranging register data
      Take the top (x-n) lanes of register N (starting at lane n) as the low lanes of the result, followed by the bottom n lanes of register M as the high lanes, forming a new vector.

      Result_t vext_type(Vector_t N,Vector_t M,int n)
      Result_t vextq_type(Vector_t N,Vector_t M,int n)
      

      Other rearrangement instructions include:

      vtbl_type, vrev_type, vtrn_type, vzip_type, vuzp_type, vcombine_type ...
      to be covered in detail another time.
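The extract operation can be sketched in plain C (scalar model, 8-lane case; `model_vext_u8` is our own name): the result takes lanes shift..7 of the first operand followed by lanes 0..shift-1 of the second.

```c
#include <assert.h>
#include <stdint.h>

/* Scalar model of vext_u8(n_vec, m_vec, shift) for 8-lane vectors:
 * out = { n_vec[shift..7], m_vec[0..shift-1] }. */
static void model_vext_u8(const uint8_t n_vec[8], const uint8_t m_vec[8],
                          int shift, uint8_t out[8])
{
    for (int i = 0; i < 8; ++i)
        out[i] = (i + shift < 8) ? n_vec[i + shift] : m_vec[i + shift - 8];
}
```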

    • Type-conversion instructions
      Reinterpret the register value as a different type, from SrcType to DstType; the underlying bits and the total byte count are unchanged. Example: vreinterpret_f32_s32(int32x2_t) converts an int32x2_t to a float32x2_t.

      vreinterpret_DstType_SrcType(Vector_t N)
      
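On one lane, reinterpretation is exactly a bit-pattern copy, which portable C expresses with memcpy (scalar sketch; `model_reinterpret_f32_s32` is our own name, not the intrinsic):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Scalar model of vreinterpret_f32_s32 on a single lane: the bit
 * pattern is preserved, only the type changes. */
static float model_reinterpret_f32_s32(int32_t x)
{
    float f;
    memcpy(&f, &x, sizeof f);
    return f;
}
```

For example, the IEEE-754 bit pattern 0x3f800000 reinterprets to the float 1.0f.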
    • Arithmetic instructions
      [normal] Addition: res = M+N

      Result_t vadd_type(Vector_t M,Vector_t N)
      Result_t vaddq_type(Vector_t M,Vector_t N)
      

      [long] Widening addition: res = M+N. To avoid overflow, the sums are stored in a register whose elements are twice as wide, e.g. uint16x8_t res = vaddl_u8(uint8x8_t M, uint8x8_t N).

      Result_t vaddl_type(Vector_t M,Vector_t N)
      

      [wide] Addition res = M+N, where the first operand M is wider than the second operand N.

      Result_t vaddw_type(Vector_t M,Vector_t N)
      

      [normal] Halving addition: res = (M+N) >> 1 with the sum truncated, i.e. the average of M and N rounded down

      Result_t vhadd_type(Vector_t M,Vector_t N)
      

      [normal] Rounding halving addition: res = (M+N+1) >> 1, i.e. the average of M and N rounded to nearest

      Result_t vrhadd_type(Vector_t M,Vector_t N)
      

      [saturating] Saturating addition: res = sat(M+N). E.g. in uint8x8_t res = vqadd_u8(uint8x8_t M, uint8x8_t N), a sum outside the uint8_t range [0,255], say 256, is clamped to 255.

      Result_t vqadd_type(Vector_t M,Vector_t N)
      

      [narrow] Narrowing addition: res = M+N, keeping the high half of each sum, so the result elements are half the width of the operands, e.g. uint8x8_t res = vaddhn_u16(uint16x8_t M, uint16x8_t N)

      Result_t vaddhn_type(Vector_t M,Vector_t N)
      

      [normal] Subtraction: res = M-N

      Result_t vsub_type(Vector_t M,Vector_t N)
      

      [normal] Multiplication: res = M*N

      Result_t vmul_type(Vector_t M,Vector_t N)
      Result_t vmulq_type(Vector_t M,Vector_t N)
      

      [normal] Multiply-accumulate: res = M+N*P

      Result_t vmla_type(Vector_t M,Vector_t N,Vector_t P)
      Result_t vmlaq_type(Vector_t M,Vector_t N,Vector_t P)
      

      [normal] Multiply-subtract: res = M-N*P

      Result_t vmls_type(Vector_t M,Vector_t N,Vector_t P)
      Result_t vmlsq_type(Vector_t M,Vector_t N,Vector_t P)
      

      Like addition, subtraction and multiplication come in a similar family of variants...
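Multiply-accumulate is the workhorse of filtering and dot-product kernels; a plain-C scalar model of vmla_f32 (2 lanes; `model_vmla_f32` is our own name, not the intrinsic) looks like this:

```c
#include <assert.h>

/* Scalar model of vmla_f32: res = m + n * p, applied per lane. */
static void model_vmla_f32(const float m[2], const float n[2],
                           const float p[2], float res[2])
{
    for (int i = 0; i < 2; ++i)
        res[i] = m[i] + n[i] * p[i];
}
```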

    • Data-processing instructions
      [normal] Absolute value: res = abs(M)

      Result_t vabs_type(Vector_t M)
      

      [normal] Negation: res = -M

      Result_t vneg_type(Vector_t M)
      

      [normal] Maximum: res = max(M,N)

      Result_t vmax_type(Vector_t M,Vector_t N)
      

      [normal] Minimum: res = min(M,N)

      Result_t vmin_type(Vector_t M,Vector_t N)
      

      ...

    • Comparison instructions
      [normal] Compare equal: res = mask(M == N)

      Result_t vceq_type(Vector_t M,Vector_t N)
      

      [normal] Compare greater than or equal: res = mask(M >= N)

      Result_t vcge_type(Vector_t M,Vector_t N)
      

      [normal] Compare greater than: res = mask(M > N)

      Result_t vcgt_type(Vector_t M,Vector_t N)
      

      [normal] Compare less than or equal: res = mask(M <= N)

      Result_t vcle_type(Vector_t M,Vector_t N)
      

      [normal] Compare less than: res = mask(M < N)

      Result_t vclt_type(Vector_t M,Vector_t N)
      

      ...
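NEON comparisons do not produce a single flag: each lane yields an all-ones mask when the condition holds and zero otherwise, which feeds directly into bitwise-select operations. A one-lane scalar sketch (`model_vcgt_u8` is our own name, not the intrinsic):

```c
#include <assert.h>
#include <stdint.h>

/* Scalar model of vcgt_u8 on one lane: 0xFF if m > n, else 0x00,
 * ready to be used as a bitwise-select mask. */
static uint8_t model_vcgt_u8(uint8_t m, uint8_t n)
{
    return m > n ? 0xFFu : 0x00u;
}
```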

    • Pairwise (reduction) instructions
      [normal] Pairwise addition: adjacent elements within M and within N are added, and the sums form a new result vector

      Result_t vpadd_type(Vector_t M,Vector_t N)
      

      [normal] Pairwise maximum: adjacent elements within M and within N are compared, and the maxima form a new result vector

      Result_t vpmax_type(Vector_t M,Vector_t N)
      

      [normal] Pairwise minimum: adjacent elements within M and within N are compared, and the minima form a new result vector

      Result_t vpmin_type(Vector_t M,Vector_t N)
      
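The pairwise pattern can be sketched in plain C (scalar model of vpadd_u8; `model_vpadd_u8` is our own name): the four adjacent-pair sums from m form the low half of the result and the four from n the high half.

```c
#include <assert.h>
#include <stdint.h>

/* Scalar model of vpadd_u8: add adjacent pairs within m and within n;
 * out = { m0+m1, m2+m3, m4+m5, m6+m7, n0+n1, n2+n3, n4+n5, n6+n7 }. */
static void model_vpadd_u8(const uint8_t m[8], const uint8_t n[8], uint8_t out[8])
{
    for (int i = 0; i < 4; ++i) {
        out[i]     = (uint8_t)(m[2 * i] + m[2 * i + 1]);
        out[i + 4] = (uint8_t)(n[2 * i] + n[2 * i + 1]);
    }
}
```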






    Arm Neon - Resources

    Neon
    https://developer.arm.com/architectures/instruction-sets/simd-isas/neon

    Arm Neon technology is an advanced Single Instruction Multiple Data (SIMD) architecture extension for the Arm Cortex-A and Cortex-R series processors.

    Neon technology is a packed SIMD architecture. Neon registers are considered as vectors of elements of the same data type, with Neon instructions operating on multiple elements simultaneously. Multiple data types are supported by the technology, including floating-point and integer operations.

    Neon technology is intended to improve the multimedia user experience by accelerating audio and video encoding and decoding, user interface, 2D/3D graphics, and gaming. Neon can also accelerate signal processing algorithms and functions to speed up applications such as audio and video processing, voice and facial recognition, computer vision, and deep learning.

    As a programmer, there are several ways you can use Neon technology:

    1. Neon intrinsics
    2. Neon-enabled libraries
    3. Auto-vectorization by your compiler
    4. Hand-coded Neon assembler

    1. Neon Programmer Guides for Armv8-A

    Introducing Neon for Armv8-A
    https://developer.arm.com/architectures/instruction-sets/simd-isas/neon/neon-programmers-guide-for-armv8-a/introducing-neon-for-armv8-a

    Compiling for Neon with Auto-Vectorization
    https://developer.arm.com/architectures/instruction-sets/simd-isas/neon/neon-programmers-guide-for-armv8-a/compiling-for-neon-with-auto-vectorization

    Optimizing C Code with Neon Intrinsics
    https://developer.arm.com/architectures/instruction-sets/simd-isas/neon/neon-programmers-guide-for-armv8-a/optimizing-c-code-with-neon-intrinsics

    Neon Intrinsics Chromium Case Study
    https://developer.arm.com/architectures/instruction-sets/simd-isas/neon/neon-programmers-guide-for-armv8-a/neon-intrinsics-chromium-case-study

    Coding for Neon
    https://developer.arm.com/architectures/instruction-sets/simd-isas/neon/neon-programmers-guide-for-armv8-a/coding-for-neon

    2. Neon Intrinsics Reference

    Neon Intrinsics
    https://developer.arm.com/architectures/instruction-sets/simd-isas/neon/intrinsics

    3. Using Neon Intrinsics on Android

    Neon Intrinsics - Getting Started on Android
    https://developer.arm.com/solutions/os/android/developer-guides/neon-intrinsics-getting-started-on-android

    How to Truncate Thresholding and Convolution of a 1D Signal?
    https://developer.arm.com/solutions/os/android/developer-guides/neon-intrinsics-on-android-how-to-truncate-thresholding-and-convolution-of-a-1d-signal

    4. Neon-enabled libraries

    Arm Compute Library
    https://developer.arm.com/ip-products/processors/machine-learning/compute-library

    Ne10
    https://projectne10.github.io/Ne10/

    Libyuv

    Skia

    5. Auto-vectorization

    Auto-vectorization is the process by which a compiler can automatically analyze your code and identify opportunities to optimize performance with Neon. Compilers that can perform auto-vectorization include Arm Compiler, LLVM or Clang, and GCC.
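Auto-vectorizers handle loops shaped like the one below well (for example with gcc -O3, or -mfpu=neon on ARMv7): a simple counted loop over arrays with no loop-carried dependence. The function name is ours, chosen for illustration.

```c
#include <assert.h>

/* A simple saxpy-style loop: with no loop-carried dependence and a
 * straightforward trip count, compilers such as GCC and Clang can
 * turn this into NEON vector instructions automatically at -O2/-O3. */
static void saxpy(int len, float a, const float *x, float *y)
{
    for (int i = 0; i < len; ++i)
        y[i] = a * x[i] + y[i];
}
```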

    Compiling for Neon with Auto-Vectorization
    https://developer.arm.com/architectures/instruction-sets/simd-isas/neon/neon-programmers-guide-for-armv8-a/compiling-for-neon-with-auto-vectorization

    Arm Compiler 6 Documentation
    https://developer.arm.com/tools-and-software/embedded/arm-compiler/documentation

    Auto-Vectorization in LLVM
    https://llvm.org/docs/Vectorizers.html

    GCC online documentation
    https://gcc.gnu.org/onlinedocs/

    6. Neon assembly code

    Neon assembler
    For very high performance, hand-coded Neon assembler can be the best approach for experienced programmers. Both the GNU assembler and the Arm Compiler toolchain assembler support assembly of Neon instructions.

    Arm Architecture Reference Manual Armv8, for Armv8-A architecture profile
    https://developer.arm.com/documentation/ddi0487/latest

    Software Optimization Guides
    https://developer.arm.com/search#q=software%20optimization%20guide

    Coding for Neon
    https://developer.arm.com/architectures/instruction-sets/simd-isas/neon/neon-programmers-guide-for-armv8-a/coding-for-neon

    Coding for Neon - Part 4: Shifting Left and Right
    https://community.arm.com/developer/ip-products/processors/b/processors-ip-blog/posts/coding-for-neon--part-4-shifting-left-and-right

    Coding for Neon - Part 5: Rearranging Vectors
    https://community.arm.com/developer/ip-products/processors/b/processors-ip-blog/posts/coding-for-neon--part-5-rearranging-vectors

    7. Arm tools for Neon

    Arm Development Studio
    https://developer.arm.com/tools-and-software/embedded/arm-development-studio

    Arm Mobile Studio
    https://developer.arm.com/tools-and-software/graphics-and-gaming/arm-mobile-studio

    Arm Compiler
    https://developer.arm.com/tools-and-software/embedded/arm-compiler

    8. Resources

    https://developer.arm.com/architectures/instruction-sets/simd-isas/neon
    https://neon-lang.dev/
    https://static.docs.arm.com/den0018/a/DEN0018A_neon_programmers_guide_en.pdf
