  • zynq dma手册.zip

    2021-01-14 16:17:34
    Zynq AXI-bus DMA manual
  • ZYNQ DMA test

    2020-01-19 17:26:49

    Reference articles:

    ZYNQ basics series (7): LWIP data path -- PL data -> PS -> PC (TCP) https://blog.csdn.net/long_fly/article/details/79760956

    ZYNQ basics series (6): Basic DMA usage https://blog.csdn.net/long_fly/article/details/79702222
  • ZYNQ AXI DMA study notes

    2019-02-21 09:32:06
    A link roundup: [Original] ZYNQ AXI DMA explained (1); ZYNQ basics series (6): Basic DMA usage; zedboard AXI DMA Linux driver; Zynq Linux AXI DMA transfer steps explained; [ZYNQ-7000 development, part 5]: AXI DMA FIFO read/write...
  • ZYNQ AXI DMA

    2020-08-22 23:45:38

    AXI DMA: the official description is that it provides high-bandwidth direct memory access between memory and AXI4-Stream peripherals. AXI DMA has two groups of interfaces, Memory Map and Stream; the former connects to the PS subsystem, while the latter connects to PL IP cores that carry stream interfaces.

    Its optional scatter/gather feature offloads the CPU from data-moving work. In ZYNQ, AXI DMA is the bridge through which the FPGA reaches DDR3, although the process is supervised and managed by the ARM cores. Other IP (also AXI4-Stream to AXI4-MM bridges) can run without ARM management, but that is risky in SoC development -- a topic for another time.

    https://www.cnblogs.com/batianhu/p/zynq_axidma_xiangjie1.html
  • zynq DMA analysis

    2014-12-08 21:40:30
    Zynq Linux PL330 DMA

    IMPORTANT NOTE: The reference implementation contained on this page is no longer up-to-date and is kept as a reference for design teams working with older kernels.


    If you are working with a kernel newer than 3.6 (corresponding to the Xilinx-v14.4 tag on GitHub), the DMA330 driver is obsolete. The hardware DMA components in the Zynq device are controlled through the standard Linux DMA API.
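
    For those newer kernels, the fragment below is a minimal sketch of an equivalent memory-to-device transfer through the generic dmaengine API. The helper name start_fifo_write and the placeholder addresses are assumptions for illustration, not part of the original page.

    /* Minimal dmaengine sketch (kernels newer than 3.6); placeholder
     * names: start_fifo_write, dev_phys_addr, buf_phys. */
    #include <linux/dmaengine.h>
     
    static int start_fifo_write(dma_addr_t buf_phys, size_t len,
            dma_addr_t dev_phys_addr)
    {
            dma_cap_mask_t mask;
            struct dma_chan *chan;
            struct dma_async_tx_descriptor *desc;
            struct dma_slave_config cfg = {
                    .direction      = DMA_MEM_TO_DEV,
                    .dst_addr       = dev_phys_addr,
                    .dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES,
                    .dst_maxburst   = 4,
            };
     
            dma_cap_zero(mask);
            dma_cap_set(DMA_SLAVE, mask);
            chan = dma_request_channel(mask, NULL, NULL); /* any slave channel */
            if (!chan)
                    return -ENODEV;
     
            dmaengine_slave_config(chan, &cfg);
            desc = dmaengine_prep_slave_single(chan, buf_phys, len,
                    DMA_MEM_TO_DEV, DMA_PREP_INTERRUPT);
            if (!desc) {
                    dma_release_channel(chan);
                    return -ENOMEM;
            }
            dmaengine_submit(desc);
            dma_async_issue_pending(chan);  /* the PL330 starts moving data */
            return 0;
    }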



    The Zynq-7000 family processor block includes an eight-channel PL330 DMA controller that you can use to significantly improve throughput between your custom hardware peripherals and external memory. Xilinx provides a Linux driver for the PL330 DMA controller itself, but in order to use it in your applications you will need to write custom software drivers to configure it for your application. This page hosts a simple example driver that illustrates DMA-based transfers between the Linux user space and a FIFO-based AXI interface similar to the Xilinx AXI Streaming FIFO (axi_mm2s_fifo).

    Using the PL330 DMA Driver

    The Linux PL330 DMA API is modeled on the ISA DMA API and performs DMA transfers between a device and memory, i.e. between a fixed address and a memory region. Configuration for the various parameters of the DMA transaction, such as source and destination burst size, burst length, protection control, etc., is passed through exported functions provided by the driver. The driver constructs PL330 DMA programs and passes control to the PL330 itself to execute them.

    You need to set up the AXI bus transaction configurations for both the device and memory sides of the DMA transfer. You pass these settings via the struct pl330_client_data and the function set_pl330_client_data, both of which are defined in arch/arm/mach-zynq/include/mach/pl330.h.

    The driver has interrupt service routines for both the DMA done interrupt and DMA fault interrupt. You can pass your own callbacks for these interrupts to the driver using the set_pl330_done_callback and set_pl330_fault_callback functions.
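
    A minimal sketch of the two callback signatures, matching the ones used by the example driver source at the end of this page:

    /* Done callback: one argument for the channel, one for private data */
    static void my_done_callback(unsigned int channel, void *data)
    {
            /* transfer complete: e.g. mark the device idle and wake waiters */
    }
     
    /* Fault callback: also receives the fault type and faulting address */
    static void my_fault_callback2(unsigned int channel,
            unsigned int fault_type, unsigned int fault_address, void *data)
    {
            /* DMA fault: log fault_type/fault_address and reset the device */
    }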

    Here is a simple example of how to start a DMA transaction:
    /* AXI bus settings for the device and memory sides of the transfer */
    struct pl330_client_data client_data = {
         .dev_addr = my_device_addr,
         .dev_bus_des = {
             .burst_size = 4,
             .burst_len = 4,
         },
         .mem_bus_des = {
             .burst_size = 4,
             .burst_len = 4,
         },
     };
     
    /* Reserve a DMA channel */
    status = request_dma(channel, DRIVER_NAME);
    if (status != 0)
        goto failed;
     
    /* Describe the transfer: direction, memory address, byte count */
    set_dma_mode(channel, DMA_MODE_READ);
    set_dma_addr(channel, buf_bus_addr);
    set_dma_count(channel, num_of_bytes);
     
    /* Attach the bus settings and the completion/fault callbacks */
    set_pl330_client_data(channel, &client_data);
    set_pl330_done_callback(channel, my_done_callback, my_dev);
    set_pl330_fault_callback(channel, my_fault_callback2, my_dev);
     
    /* Start the transfer */
    enable_dma(channel);

    Creating Custom Drivers Using PL330 DMA Functions

    Of course, the above function calls must be made in a kernel context with allocated DMA buffers, etc., which requires that you write a custom driver for your hardware. In the example below, we've put together a driver for a generic FIFO-based system. This is a very simple example, performing only blocking writes to a FIFO interface modeled on the AXI MM2S FIFO core (or another similar generic FIFO).

    Setting up the Build Environment

    This step requires the ARM GNU tools, which are part of Xilinx SDK, to be installed on your host system. Specify the ARM cross-compiler by setting the CROSS_COMPILE environment variable and adding the cross-compiler to your PATH.
    bash> export CROSS_COMPILE=arm-xilinx-linux-gnueabi-
    bash> export PATH=/path/to/cross/compiler/bin:$PATH

    Creating a Makefile

    Linux drivers can either be compiled into the kernel at build time or compiled separately as loadable kernel modules. When developing a device driver, it's often advantageous to compile it separately to shorten the build process and allow you to dynamically load and unload the module.

    If you want to build the kernel module outside of the Linux source tree, you'll need to create a makefile that links into the kernel build mechanism.
    # Cross compiler makefile for FIFO DMA example
    KERN_SRC=/path/to/kernel/source
    obj-m := xfifo_dma.o
     
    all:
        make -C $(KERN_SRC) ARCH=arm M=`pwd` modules
    clean:
        make -C $(KERN_SRC) ARCH=arm M=`pwd` clean

    Updating the DTS File

    After building a Linux kernel module, the kernel needs to have a way to associate it with a particular hardware device in your system. If you're doing development that you know is only going to target one particular hardware platform you could, of course, hard-code things like device addresses into the driver itself. However, it's generally considered bad practice and it's preferable to register your module as a platform device driver. On Linux for Xilinx devices, most of this is accomplished via Open Firmware using a device tree (DTS) file.

    In order for your driver to read information from this file you'll need to register your device as a platform device with a corresponding probe function (explained in more detail later) and also add a hardware instance to your DTS file.
        fifo_dma0: fifo_dma@78000000 {
            compatible = "xlnx,fifo-dma";
            reg = <0x78000000 0x2000>;
            fifo-depth = <2048>;
            dma-channel = <1>;
            burst-length = <4>;
        };

    Building the Driver

    Place the driver source file into the same directory as your makefile, and run make to compile the driver. Assuming there are no errors in the build process, you'll wind up with a file called xfifo_dma.ko, which is a loadable kernel object.
    bash> make

    Transfer your Kernel Module to the Target Platform

    The Linux kernel module tools insmod and rmmod expect kernel modules to be placed in a specific location that doesn't exist by default in the Zynq ramdisk8M.image.gz root file system. Once your system is booted, you'll need to create a modules directory to hold your kernel object.
    zynq> mkdir -p /lib/modules/`uname -r`
    zynq> ln -s /lib/modules/`uname -r` /lib/modules/3.3
    After creating the required directory structure and the symbolic link for ease of use, upload the xfifo_dma.ko kernel module to /lib/modules/3.3 (if using FTP to transfer the file, be sure that your FTP client is in binary mode).

    Load the Kernel Module

    After transferring the kernel module to the board, you'll need to load it into memory.
    zynq> cd /lib/modules/3.3
    zynq> insmod xfifo_dma.ko
     
    We have 1 resources
    xfifo_dma 78000000.fifo_dma: read DMA channel is 1
    xfifo_dma 78000000.fifo_dma: DMA fifo depth is 2048
    xfifo_dma 78000000.fifo_dma: DMA burst length is 4
    devno is 0x3c00000, pdev id is 0
    xfifo_dma: mapped 0x78000000 to 0xf0074000
    xfifo_dma 78000000.fifo_dma: added Xilinx FIFO DMA successfully

    Create a Device Node

    Finally, before you can access the driver from userspace you'll need to create a device node under /dev to use for file operations.
    zynq> mknod /dev/fifo-dma0 c 60 0
    The driver is coded to request a major number of 60.
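    As an aside, a driver can instead ask the kernel for a dynamic major; a minimal sketch with the standard chrdev API (the driver below deliberately keeps the fixed register_chrdev_region path):

    /* Hedged alternative to the fixed major 60: let the kernel pick one */
    dev_t devno;
    int status = alloc_chrdev_region(&devno, XFIFO_DMA_MINOR, 1, MODULE_NAME);
    if (status == 0)
            pr_info("create the node with: mknod /dev/fifo-dma0 c %d 0\n",
                    MAJOR(devno));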

    Using the Driver

    Now that the kernel object has been loaded, you can access it as normal using file operations. Note that, as with any DMA transaction, there is additional time required to set up the DMA, making it less efficient than processor-driven transfers for small blocks of data. For larger blocks, other system considerations such as memory bandwidth utilization, AXI bandwidth, or the bandwidth of your hardware peripherals will contribute much more heavily.
    zynq> dd if=/dev/urandom bs=1024 count=1 of=/dev/fifo-dma0
    dma buffer alloc - d @0x2e100000 v @0xffdf9000
    dma write 1024 bytes
    1+0 records in
    1+0 records out
    1024 bytes (1.0KB) copied, 0.006820 seconds, 146.6KB/s
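
    The same path can be exercised from a small C program; a minimal userspace sketch (error handling trimmed, device node name from the mknod step above):

    /* Minimal userspace sketch: push one 1 KiB buffer through the driver */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
     
    int main(void)
    {
            char buf[1024];
            ssize_t n;
            int fd, i;
     
            for (i = 0; i < (int) sizeof(buf); i++)
                    buf[i] = rand() & 0xff;         /* arbitrary test pattern */
     
            fd = open("/dev/fifo-dma0", O_WRONLY);
            if (fd < 0) {
                    perror("open");
                    return 1;
            }
            n = write(fd, buf, sizeof(buf));  /* blocks until the DMA completes */
            printf("wrote %zd bytes\n", n);
            close(fd);
            return 0;
    }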

    Driver Statistics

    As a final note, this driver keeps statistics that are available under /proc/driver/xfifo_dma.
    zynq> cat /proc/driver/xfifo_dma
     
    FIFO DMA Test:
     
    Device Physical Address: 0x78000000
    Device Virtual Address:  0xf0074000
    Device Address Space:    8192 bytes
    DMA Channel:             1
    FIFO Depth:              2048 bytes
    Burst Length:            4 words
     
    Opens:                   1
    Writes:                  1
    Bytes Written:           1024
    Closes:                  1
    Errors:                  0
    Busy:                    0

    FIFO DMA Test Driver Source

    /*
     * Driver for Linux DMA test application (FIFO)
     *
     * Copyright (C) 2012 Xilinx, Inc.
     * Copyright (C) 2012 Robert Armstrong
     *
     * Author: Robert Armstrong <robert.armstrong-jr@xilinx.com>
     *
     * This program is free software; you can redistribute it and/or modify
     * it under the terms of the GNU General Public License as published by
     * the Free Software Foundation; either version 2 of the License, or
     * (at your option) any later version.
     *
     * This program is distributed in the hope that it will be useful,
     * but WITHOUT ANY WARRANTY; without even the implied warranty of
     * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
     * GNU General Public License for more details.
     */
     
    #include <linux/kernel.h>
    #include <linux/init.h>
    #include <linux/module.h>
    #include <linux/moduleparam.h>
    #include <linux/platform_device.h>
    #include <linux/seq_file.h>
    #include <linux/proc_fs.h>
    #include <linux/err.h>
    #include <linux/slab.h>
    #include <linux/fs.h>
    #include <linux/cdev.h>
    #include <linux/dma-mapping.h>
    #include <linux/dmapool.h>
    #include <linux/mutex.h>
    #include <linux/sched.h>
    #include <linux/wait.h>
    #include <asm/uaccess.h>
    #include <asm/sizes.h>
    #include <asm/dma.h>
    #include <asm/io.h>
    #include <mach/pl330.h>
    #include <linux/of.h>
     
    /* Define debugging for use during our driver bringup */
    #undef PDEBUG
    #define PDEBUG(fmt, args...) printk(KERN_INFO fmt, ## args)
     
    /* Offsets for control registers in the AXI MM2S FIFO */
    #define AXI_TXFIFO_STS          0x0
    #define AXI_TXFIFO_RST          0x08
    #define AXI_TXFIFO_VAC          0x0c
    #define AXI_TXFIFO              0x10
    #define AXI_TXFIFO_LEN          0x14
     
    #define TXFIFO_STS_CLR          0xffffffff
    #define TXFIFO_RST              0x000000a5
     
    #define MODULE_NAME             "xfifo_dma"
    #define XFIFO_DMA_MINOR         0
     
    int xfifo_dma_major = 60;
    module_param(xfifo_dma_major, int, 0);
     
    dma_addr_t write_buffer;
     
    DECLARE_WAIT_QUEUE_HEAD(xfifo_dma_wait);
     
    struct xfifo_dma_dev {
            dev_t devno;
            struct mutex mutex;
            struct cdev cdev;
            struct platform_device *pdev;
     
            struct pl330_client_data *client_data;
     
            u32 dma_channel;
            u32 fifo_depth;
            u32 burst_length;
     
            /* Current DMA buffer information */
            dma_addr_t buffer_d_addr;
            void *buffer_v_addr;
            size_t count;
            int busy;
     
            /* Hardware device constants */
            u32 dev_physaddr;
            void *dev_virtaddr;
            u32 dev_addrsize;
     
            /* Driver reference counts */
            u32 writers;
     
            /* Driver statistics */
            u32 bytes_written;
            u32 writes;
            u32 opens;
            u32 closes;
            u32 errors;
    };
     
    struct xfifo_dma_dev *xfifo_dma_dev;
     
    static void xfifo_dma_reset_fifo(void)
    {
            iowrite32(TXFIFO_STS_CLR, xfifo_dma_dev->dev_virtaddr + AXI_TXFIFO_STS);
            iowrite32(TXFIFO_RST, xfifo_dma_dev->dev_virtaddr + AXI_TXFIFO_RST);
    }
     
    /* File operations */
    int xfifo_dma_open(struct inode *inode, struct file *filp)
    {
            struct xfifo_dma_dev *dev;
            int retval;
     
            retval = 0;
            dev = container_of(inode->i_cdev, struct xfifo_dma_dev, cdev);
            filp->private_data = dev;       /* For use elsewhere */
     
            if (mutex_lock_interruptible(&dev->mutex)) {
                    return -ERESTARTSYS;
            }
     
            /* We're only going to allow one write at a time, so manage that via
             * reference counts
             */
            switch (filp->f_flags & O_ACCMODE) {
            case O_RDONLY:
                    break;
            case O_WRONLY:
                    if (dev->writers || dev->busy) {
                            retval = -EBUSY;
                            goto out;
                    }
                    else {
                            dev->writers++;
                    }
                    break;
            case O_RDWR:
            default:
                    if (dev->writers || dev->busy) {
                            retval = -EBUSY;
                            goto out;
                    }
                    else {
                            dev->writers++;
                    }
            }
     
            dev->opens++;
     
    out:
            mutex_unlock(&dev->mutex);
            return retval;
    }
     
    int xfifo_dma_release(struct inode *inode, struct file *filp)
    {
            struct xfifo_dma_dev *dev = filp->private_data;
     
            if (mutex_lock_interruptible(&dev->mutex)) {
                    return -EINTR;
            }
     
            /* Manage writes via reference counts */
            switch (filp->f_flags & O_ACCMODE) {
            case O_RDONLY:
                    break;
            case O_WRONLY:
                    dev->writers--;
                    break;
            case O_RDWR:
            default:
                    dev->writers--;
            }
     
            dev->closes++;
     
            mutex_unlock(&dev->mutex);
     
            return 0;
    }
     
    ssize_t xfifo_dma_read(struct file *filp, char __user *buf, size_t count,
            loff_t *f_pos)
    {
            return 0;
    }
     
    static void xfifo_dma_fault_callback(unsigned int channel,
            unsigned int fault_type,
            unsigned int fault_address,
            void *data)
    {
            struct xfifo_dma_dev *dev = data;
     
            dev_err(&dev->pdev->dev,
                    "DMA fault type %d at address 0x%0x on channel %d\n",
                    fault_type, fault_address, channel);
     
            dev->errors++;
            xfifo_dma_reset_fifo();
            dev->busy = 0;
            wake_up_interruptible(&xfifo_dma_wait);
    }
     
    static void xfifo_dma_done_callback(unsigned int channel, void *data)
    {
            struct xfifo_dma_dev *dev = data;
     
            dev->bytes_written += dev->count;
            dev->busy = 0;
     
            /* Write the count to the FIFO control register */
            iowrite32(dev->count, xfifo_dma_dev->dev_virtaddr + AXI_TXFIFO_LEN);
     
            wake_up_interruptible(&xfifo_dma_wait);
    }
     
     
    ssize_t xfifo_dma_write(struct file *filp, const char __user *buf, size_t count,
            loff_t *f_pos)
    {
            struct xfifo_dma_dev *dev = filp->private_data;
            size_t transfer_size;
     
            int retval = 0;
     
            if (mutex_lock_interruptible(&dev->mutex)) {
                    return -EINTR;
            }
     
            dev->writes++;
     
            transfer_size = count;
            if (count > dev->fifo_depth) {
                    transfer_size = dev->fifo_depth;
            }
     
            /* Allocate a DMA buffer for the transfer */
            dev->buffer_v_addr = dma_alloc_coherent(&dev->pdev->dev, transfer_size,
                    &dev->buffer_d_addr, GFP_KERNEL);
            if (!dev->buffer_v_addr) {
                    dev_err(&dev->pdev->dev,
                            "coherent DMA buffer allocation failed\n");
                    retval = -ENOMEM;
                    goto fail_buffer;
            }
     
            PDEBUG("dma buffer alloc - d @0x%0x v @0x%0x\n",
                    (u32)dev->buffer_d_addr, (u32)dev->buffer_v_addr);
     
            if (request_dma(dev->dma_channel, MODULE_NAME)) {
                    dev_err(&dev->pdev->dev,
                            "unable to alloc DMA channel %d\n",
                            dev->dma_channel);
                    retval = -EBUSY;
                    goto fail_client_data;
            }
     
            dev->busy = 1;
            dev->count = transfer_size;
     
            set_dma_mode(dev->dma_channel, DMA_MODE_WRITE);
            set_dma_addr(dev->dma_channel, dev->buffer_d_addr);
            set_dma_count(dev->dma_channel, transfer_size);
            set_pl330_client_data(dev->dma_channel, dev->client_data);
            set_pl330_done_callback(dev->dma_channel,
                    xfifo_dma_done_callback, dev);
            set_pl330_fault_callback(dev->dma_channel,
                    xfifo_dma_fault_callback, dev);
            set_pl330_incr_dev_addr(dev->dma_channel, 0);
     
            /* Load our DMA buffer with the user data; bail out cleanly if
             * the user pages cannot be read */
            if (copy_from_user(dev->buffer_v_addr, buf, transfer_size)) {
                    retval = -EFAULT;
                    dev->busy = 0;
                    free_dma(dev->dma_channel);
                    goto fail_client_data;
            }
     
            xfifo_dma_reset_fifo();
            /* Kick off the DMA */
            enable_dma(dev->dma_channel);
     
            mutex_unlock(&dev->mutex);
     
            wait_event_interruptible(xfifo_dma_wait, dev->busy == 0);
     
            /* Deallocate the DMA buffer and free the channel */
            free_dma(dev->dma_channel);
     
            dma_free_coherent(&dev->pdev->dev, dev->count, dev->buffer_v_addr,
                    dev->buffer_d_addr);
     
            PDEBUG("dma write %d bytes\n", transfer_size);
     
            return transfer_size;
     
    fail_client_data:
            dma_free_coherent(&dev->pdev->dev, transfer_size, dev->buffer_v_addr,
                    dev->buffer_d_addr);
    fail_buffer:
            mutex_unlock(&dev->mutex);
            return retval;
    }
     
    struct file_operations xfifo_dma_fops = {
            .owner = THIS_MODULE,
            .read = xfifo_dma_read,
            .write = xfifo_dma_write,
            .open = xfifo_dma_open,
            .release = xfifo_dma_release
    };
     
    /* Driver /proc filesystem operations so that we can show some statistics */
    static void *xfifo_dma_proc_seq_start(struct seq_file *s, loff_t *pos)
    {
            if (*pos == 0) {
                    return xfifo_dma_dev;
            }
     
            return NULL;
    }
     
    static void *xfifo_dma_proc_seq_next(struct seq_file *s, void *v, loff_t *pos)
    {
            (*pos)++;
            return NULL;
    }
     
    static void xfifo_dma_proc_seq_stop(struct seq_file *s, void *v)
    {
    }
     
    static int xfifo_dma_proc_seq_show(struct seq_file *s, void *v)
    {
            struct xfifo_dma_dev *dev;
     
            dev = v;
            if (mutex_lock_interruptible(&dev->mutex)) {
                    return -EINTR;
            }
     
            seq_printf(s, "\nFIFO DMA Test:\n\n");
            seq_printf(s, "Device Physical Address: 0x%0x\n", dev->dev_physaddr);
            seq_printf(s, "Device Virtual Address:  0x%0x\n",
                    (u32)dev->dev_virtaddr);
            seq_printf(s, "Device Address Space:    %d bytes\n", dev->dev_addrsize);
            seq_printf(s, "DMA Channel:             %d\n", dev->dma_channel);
            seq_printf(s, "FIFO Depth:              %d bytes\n", dev->fifo_depth);
            seq_printf(s, "Burst Length:            %d words\n", dev->burst_length);
            seq_printf(s, "\n");
            seq_printf(s, "Opens:                   %d\n", dev->opens);
            seq_printf(s, "Writes:                  %d\n", dev->writes);
            seq_printf(s, "Bytes Written:           %d\n", dev->bytes_written);
            seq_printf(s, "Closes:                  %d\n", dev->closes);
            seq_printf(s, "Errors:                  %d\n", dev->errors);
            seq_printf(s, "Busy:                    %d\n", dev->busy);
            seq_printf(s, "\n");
     
            mutex_unlock(&dev->mutex);
            return 0;
    }
     
    /* SEQ operations for /proc */
    static struct seq_operations xfifo_dma_proc_seq_ops = {
            .start = xfifo_dma_proc_seq_start,
            .next = xfifo_dma_proc_seq_next,
            .stop = xfifo_dma_proc_seq_stop,
            .show = xfifo_dma_proc_seq_show
    };
     
    static int xfifo_dma_proc_open(struct inode *inode, struct file *file)
    {
            return seq_open(file, &xfifo_dma_proc_seq_ops);
    }
     
    static struct file_operations xfifo_dma_proc_ops = {
            .owner = THIS_MODULE,
            .open = xfifo_dma_proc_open,
            .read = seq_read,
            .llseek = seq_lseek,
            .release = seq_release
    };
     
    static int xfifo_dma_remove(struct platform_device *pdev)
    {
            cdev_del(&xfifo_dma_dev->cdev);
     
            remove_proc_entry("driver/xfifo_dma", NULL);
     
            unregister_chrdev_region(xfifo_dma_dev->devno, 1);
     
            /* Unmap the I/O memory */
            if (xfifo_dma_dev->dev_virtaddr) {
                    iounmap(xfifo_dma_dev->dev_virtaddr);
                    release_mem_region(xfifo_dma_dev->dev_physaddr,
                            xfifo_dma_dev->dev_addrsize);
            }
     
            /* Free the PL330 buffer client data descriptors */
            if (xfifo_dma_dev->client_data) {
                    kfree(xfifo_dma_dev->client_data);
            }
     
            if (xfifo_dma_dev) {
                    kfree(xfifo_dma_dev);
            }
     
            return 0;
    }
     
    #ifdef CONFIG_OF
    static struct of_device_id xfifodma_of_match[] __devinitdata = {
            { .compatible = "xlnx,fifo-dma", },
            { /* end of table */}
    };
    MODULE_DEVICE_TABLE(of, xfifodma_of_match);
    #else
    #define xfifodma_of_match NULL
    #endif /* CONFIG_OF */
     
    static int xfifo_dma_probe(struct platform_device *pdev)
    {
            int status;
            struct proc_dir_entry *proc_entry;
            struct resource *xfifo_dma_resource;
     
            /* Get our platform device resources */
            PDEBUG("We have %d resources\n", pdev->num_resources);
            xfifo_dma_resource = platform_get_resource(pdev, IORESOURCE_MEM, 0);
            if (xfifo_dma_resource == NULL) {
                    dev_err(&pdev->dev, "No resources found\n");
                    return -ENODEV;
            }
     
            /* Allocate a private structure to manage this device */
            xfifo_dma_dev = kmalloc(sizeof(struct xfifo_dma_dev), GFP_KERNEL);
            if (xfifo_dma_dev == NULL) {
                    dev_err(&pdev->dev,
                            "unable to allocate device structure\n");
                    return -ENOMEM;
            }
            memset(xfifo_dma_dev, 0, sizeof(struct xfifo_dma_dev));
     
            /* Get our device properties from the device tree, if they exist */
            if (pdev->dev.of_node) {
                    if (of_property_read_u32(pdev->dev.of_node, "dma-channel",
                            &xfifo_dma_dev->dma_channel) < 0) {
                            dev_warn(&pdev->dev,
                                    "DMA channel unspecified - assuming 0\n");
                            xfifo_dma_dev->dma_channel = 0;
                    }
                    dev_info(&pdev->dev,
                            "read DMA channel is %d\n", xfifo_dma_dev->dma_channel);
                    if (of_property_read_u32(pdev->dev.of_node, "fifo-depth",
                            &xfifo_dma_dev->fifo_depth) < 0) {
                            dev_warn(&pdev->dev,
                                    "depth unspecified, assuming 0xffffffff\n");
                            xfifo_dma_dev->fifo_depth = 0xffffffff;
                    }
                    dev_info(&pdev->dev,
                            "DMA fifo depth is %d\n", xfifo_dma_dev->fifo_depth);
                    if (of_property_read_u32(pdev->dev.of_node, "burst-length",
                            &xfifo_dma_dev->burst_length) < 0) {
                            dev_warn(&pdev->dev,
                                    "burst length unspecified - assuming 1\n");
                            xfifo_dma_dev->burst_length = 1;
                    }
                    dev_info(&pdev->dev,
                            "DMA burst length is %d\n",
                            xfifo_dma_dev->burst_length);
            }
     
            xfifo_dma_dev->pdev = pdev;
     
            xfifo_dma_dev->devno = MKDEV(xfifo_dma_major, XFIFO_DMA_MINOR);
            PDEBUG("devno is 0x%0x, pdev id is %d\n", xfifo_dma_dev->devno, XFIFO_DMA_MINOR);
     
            status = register_chrdev_region(xfifo_dma_dev->devno, 1, MODULE_NAME);
            if (status < 0) {
                    dev_err(&pdev->dev, "unable to register chrdev %d\n",
                            xfifo_dma_major);
                    goto fail;
            }
     
            /* Register with the kernel as a character device */
            cdev_init(&xfifo_dma_dev->cdev, &xfifo_dma_fops);
            xfifo_dma_dev->cdev.owner = THIS_MODULE;
            xfifo_dma_dev->cdev.ops = &xfifo_dma_fops;
     
            /* Initialize our device mutex */
            mutex_init(&xfifo_dma_dev->mutex);
     
            xfifo_dma_dev->dev_physaddr = xfifo_dma_resource->start;
            xfifo_dma_dev->dev_addrsize = xfifo_dma_resource->end -
                    xfifo_dma_resource->start + 1;
            if (!request_mem_region(xfifo_dma_dev->dev_physaddr,
                    xfifo_dma_dev->dev_addrsize, MODULE_NAME)) {
                    dev_err(&pdev->dev, "can't reserve i/o memory at 0x%08X\n",
                            xfifo_dma_dev->dev_physaddr);
                    status = -ENODEV;
                    goto fail;
            }
            xfifo_dma_dev->dev_virtaddr = ioremap(xfifo_dma_dev->dev_physaddr,
                    xfifo_dma_dev->dev_addrsize);
            PDEBUG("xfifo_dma: mapped 0x%0x to 0x%0x\n", xfifo_dma_dev->dev_physaddr,
                    (unsigned int)xfifo_dma_dev->dev_virtaddr);
     
            xfifo_dma_dev->client_data = kmalloc(sizeof(struct pl330_client_data),
                    GFP_KERNEL);
            if (!xfifo_dma_dev->client_data) {
                    dev_err(&pdev->dev, "can't allocate PL330 client data\n");
                    goto fail;
            }
            memset(xfifo_dma_dev->client_data, 0, sizeof(struct pl330_client_data));
     
            xfifo_dma_dev->client_data->dev_addr =
                    xfifo_dma_dev->dev_physaddr + AXI_TXFIFO;
            xfifo_dma_dev->client_data->dev_bus_des.burst_size = 4;
            xfifo_dma_dev->client_data->dev_bus_des.burst_len =
                    xfifo_dma_dev->burst_length;
            xfifo_dma_dev->client_data->mem_bus_des.burst_size = 4;
            xfifo_dma_dev->client_data->mem_bus_des.burst_len =
                    xfifo_dma_dev->burst_length;
     
            status = cdev_add(&xfifo_dma_dev->cdev, xfifo_dma_dev->devno, 1);
     
            /* Create statistics entry under /proc */
            proc_entry = create_proc_entry("driver/xfifo_dma", 0, NULL);
            if (proc_entry) {
                    proc_entry->proc_fops = &xfifo_dma_proc_ops;
            }
     
            xfifo_dma_reset_fifo();
            dev_info(&pdev->dev, "added Xilinx FIFO DMA successfully\n");
     
            return 0;
     
            fail:
            xfifo_dma_remove(pdev);
            return status;
    }
     
    static struct platform_driver xfifo_dma_driver = {
            .driver = {
                    .name = MODULE_NAME,
                    .owner = THIS_MODULE,
                    .of_match_table = xfifodma_of_match,
            },
            .probe = xfifo_dma_probe,
            .remove = xfifo_dma_remove,
    };
     
    static void __exit xfifo_dma_exit(void)
    {
            platform_driver_unregister(&xfifo_dma_driver);
    }
     
    static int __init xfifo_dma_init(void)
    {
            int status;
     
            status = platform_driver_register(&xfifo_dma_driver);
     
            return status;
    }
     
    module_init(xfifo_dma_init);
    module_exit(xfifo_dma_exit);
     
    MODULE_LICENSE("GPL");
    MODULE_DESCRIPTION("Xilinx FIFO DMA driver");
    MODULE_AUTHOR("Xilinx, Inc.");
    MODULE_VERSION("1.00a");
     
    展开全文
  • zynq DMA bare-metal example

    2016-05-25 19:33:51
    zynq7000 DMA series: there are four ways to move data between PL and PS by DMA. On the PL side: (1) AXI Central DMA, (2) AXI DMA Engine, (3) AXI Video DMA; and (4) the PL330 on the PS side. This article takes the AXI DMA engine as the example, bare-metal...
  • zynq DMA controller

    2018-11-17 20:35:00
    The PS-side DMA controller in the Zynq-7000 family is implemented with the ARM DMA-330 (PL-330) IP core. Features:
    1. Eight independent channels, four of which can be used for PL-PS data movement; each channel has a 1024-byte MFIFO.
    2. Data is moved on the CPU_2x clock, where CPU_2x = (CPU frequency / 6) * 2 ...
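    As a worked example of that formula: assuming the common 667 MHz Zynq-7000 CPU clock (an assumption; the real value depends on the device's clock configuration), CPU_2x = (667 / 6) * 2 ≈ 222 MHz.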
  • ZYNQ DMA PL-330用户手册

    2018-05-21 23:18:56
    The PL-330 DMA user manual, usable as a reference for DMA development on ZYNQ-7000 series SoCs.
  • ZYNQ DMA IP核使用手册

    2018-04-25 03:29:25
    Anyone learning ZYNQ will inevitably run into DMA; this is the official manual for the Vivado DMA IP core.
  • ZYNQ AXI DMA explained

    2018-12-26 18:27:12
    Reposted from: https://www.cnblogs.com/batianhu/p/zynq_axidma_xiangjie1.html I. Basic concepts. AXI DMA: officially described as providing high-bandwidth direct memory access between memory and AXI4-Stream peripherals... In ZYNQ, AXI DMA is the bridge through which the FPGA reaches DDR3, although the process is supervised by the ARM...
  • zynq_DMA

    2016-07-20 15:31:20
    1. Rough flow: the FPGA raises a DMA interrupt (a PL interrupt) to wake the read thread and tell the driver there is data to transfer -> the application asks the driver for a suitable DMA channel -> the application calls read() (inside which the dev->dmamem transfer completes) to fetch the data from the DMA (dmamem) buffer...
  • zynq axidma principles

    2021-02-18 19:48:11
    In ZYNQ, AXI DMA is the bridge through which the FPGA reaches DDR3, although the process is supervised and managed by the ARM cores. Other IP (also AXI4-Stream to AXI4-MM bridges) can run without ARM management, but that is risky in SoC development -- a topic for another time. As shown in Figure 1, the AXI DMA IP has six interfaces: S_...
  • zynq dma linux 配置

    2017-04-02 18:56:52
    This resource is the download attached to the blog post.
  • ZYNQ AXI DMA debugging details

    2019-03-10 14:31:23
    This article covers the simple usage mode of the ZYNQ AXI DMA: polled (no interrupts), 32-bit. 1. For the DMA function calls, consult the official DMA examples. Every peripheral has an ID: first create a structure, initialize the peripheral, and assign the peripheral's base address to the structure...
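
    A minimal sketch of that init pattern with the xaxidma.h standalone driver; the device ID would normally come from xparameters.h and is assumed here:

    #include "xaxidma.h"
     
    static XAxiDma AxiDma;       /* instance struct: holds the base address */
     
    int init_dma(u16 device_id)  /* e.g. XPAR_AXIDMA_0_DEVICE_ID (assumed) */
    {
            XAxiDma_Config *cfg = XAxiDma_LookupConfig(device_id);
            if (!cfg)
                    return XST_FAILURE;
            /* copies the base address and capabilities into the instance */
            if (XAxiDma_CfgInitialize(&AxiDma, cfg) != XST_SUCCESS)
                    return XST_FAILURE;
            return XST_SUCCESS;
    }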
  • CC drivers/dma/xilinx/zynqmp_dma.o drivers/dma/xilinx/zynqmp_dma.c:166:4: warning: attribute 'aligned' is ignored, place it after "struct" to apply attribute to type declaration [-...
  • ZYNQ Ethernet DMA error issue

    2020-05-31 14:15:40
    According to the Zynq-7000 technical reference manual (UG585), DMA must not access the address range 0x00000000 to 0x0007ffff (for 0x00000000 to 0x0003ffff the addresses are filtered by the SCU and the OCM is mapped high). Fix: open linux-xlnx-xilinx-v2016.4\arch\arm\ma
  • ZYNQDMA与AXI4总线 为什么在ZYNQDMA和AXI联系这么密切?通过上面的介绍我们知道ZYNQ中基本是以AXI总线完成相关功能的: 图4‑34连接 PS 和 PL 的 AXI 互联和接口的构架 在ZYNQ中,支持AXI-Lite,AXI4和AXI-...
  • ZYNQ DMA basic usage

    2018-10-19 12:22:00
    zynq7 + dma + fifo. A demo can be imported in the SDK; the demo programs default to 8-bit data. For other data widths you must change more than just the data length: u16 *TxBufferPtr; u16 *RxBufferPtr; u16 Value; TxBufferPtr = (u16 *...
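
    A minimal sketch of that adjustment on top of the official simple-poll demo, assuming its TX_BUFFER_BASE / RX_BUFFER_BASE / MAX_PKT_LEN macros and an already-initialized AxiDma instance:

    #include "xaxidma.h"
    #include "xil_cache.h"
     
    extern XAxiDma AxiDma;       /* initialized as in the official demo */
     
    void transfer_u16(void)
    {
            u16 *TxBufferPtr = (u16 *) TX_BUFFER_BASE;
            u16 *RxBufferPtr = (u16 *) RX_BUFFER_BASE;
            int i;
     
            /* element count shrinks with the wider type; lengths stay in bytes */
            for (i = 0; i < MAX_PKT_LEN / (int) sizeof(u16); i++)
                    TxBufferPtr[i] = (u16) i;    /* test pattern */
     
            Xil_DCacheFlushRange((UINTPTR) TxBufferPtr, MAX_PKT_LEN);
            XAxiDma_SimpleTransfer(&AxiDma, (UINTPTR) RxBufferPtr,
                    MAX_PKT_LEN, XAXIDMA_DEVICE_TO_DMA);
            XAxiDma_SimpleTransfer(&AxiDma, (UINTPTR) TxBufferPtr,
                    MAX_PKT_LEN, XAXIDMA_DMA_TO_DEVICE);
    }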
  • zynq audio pcm DMA

    2016-06-18 20:48:51
    Continuing from the Zynq ALSA discussion:
    static int axi_i2s_probe(struct platform_device *pdev)
    {
        struct resource *res;
        struct axi_i2s *i2s;
        ...
        ret = devm_snd_dmaengine_pcm_register(&pde
  • ZYNQDMA基本用法

    2020-07-21 10:54:17
    涉及到高速数据传输时,DMA就显得非常重要了,本文的DMA主要是对PL侧的AXI DMA核进行介绍(不涉及PS侧的DMA控制器)。AXI DMA的用法基本是:PS通过AXI-lite向AXI DMA发送指令,AXI DMA通过HP通路和DDR交换数据,PL...
  • Zynq-Linux-DMA-master

    2018-01-23 17:25:32
    DMA enabled Zynq PS-PL communication to implement high throughput data transfer between Linux applications and user IP core. (based on Xilinx UG873 chapter 6) This is a simple loop-back project in ...
  • A brief introduction to Zynq DMA

    2016-09-08 16:04:15
    AXI Direct Memory Access (AXI DMA): as the name says, a direct memory channel on the AXI bus. Its advantage is that with simple configuration from the PS side, it provides fast transfers between the PL and DDR3.
  • Testing bulk-data loopback with AXI DMA: background, hardware/software platform, overview, project setup. 1. Create a vivado project. 2. Create a block design: (1) click Create Block Design; (2) click the + on the right of the figure above, type zynq, and double-click to add the zynq core; (3) after adding it, double-click the zynq IP core, ...
  • ZYNQ AXI DMA usage problems

    2019-10-08 20:36:26
    I was recently bitten by the AXI DMA and spent several frustrating days before finally finding the cause. I had assumed the AXI FIFO had a bug -- a XILINX bug -- because the program kept hanging after the DMA finished reading the FIFO, and FIFO data was being lost. Most examples online are loopback...
  • Zynq7020 bare-metal DMA test

    2019-03-06 09:45:28
    I recently ran some bare-metal DMA tests; here is a record of my method. First build the logic: a DMA plus a DATA FIFO wired as a loopback, so the written and read-back data can be compared later; then compile to get the bit file and move into the SDK. In the SDK...
  • ZYNQ DMA controller

    2018-01-31 17:18:23
    Mastering DMA is the key to high-performance data transfer inside the PS and to implementing DMA transfers from the PL. The DMA controller is the DMAC. Without involving the CPU, the DMAC can move large amounts of data; the source and destination can be any memory resource on the PS or PL, including DDR, OCM, SPI flash, SMC...
  • 第八节,ZYNQDMA

    2019-04-16 21:03:31
    ZYNQDMA 1 DMA的特点和体系结构 DMA外设特点: DMA引擎拥有一个灵活的指令设置DMA的传输; 拥有8个cache线,每一个cache线宽度是4个字; 拥有8个可以并行的DMA通道线程; 拥有8个中断给中断控制器; 拥有8...
