  • K7_PCIE_DMA_XILINX

    2017-12-28 16:24:54
    Xilinx PCIe DMA development materials; hopefully a useful reference for engineers.
  • CC drivers/dma/xilinx/zynqmp_dma.o drivers/dma/xilinx/zynqmp_dma.c:166:4: warning: attribute 'aligned' is ignored, place it after "struct" to apply attribute to type declaration [-...
  • xilinx_dma

    2017-07-27 22:07:38
    dma_performance_demo
  • PDMA is designed around the Xilinx V6 FPGA family and supports a PCIe 2.0 x4 interface; it has passed high-volume, long-duration data-transfer testing. PDMA is suitable for all kinds of PCIe data-acquisition and storage cards, and sustains more than 450 MByte/s of simultaneous read and write bandwidth over a PCIe 2.0 x4 link....
  • Using the Xilinx DMA IPs

    2020-07-30 10:30:20
    Xilinx provides three DMA IP types: AXI DMA, AXI CDMA, and AXI VDMA, each suited to moving data between AXI-MM and AXI-Stream domains. The AXI DMA transmit side uses the Start of Frame bit (TXSOF) and End of Frame bit (TXEOF) to delimit packet boundaries on the AXI-Stream...

    Author

    QQ group: 852283276
    WeChat: arm80x86
    WeChat official account: 青儿创客基地
    Bilibili: homepage https://space.bilibili.com/208826118

    Xilinx provides three DMA IP types: AXI DMA, AXI CDMA, and AXI VDMA, each suited to moving data between AXI-MM and AXI-Stream domains.

    AXI DMA

    The transmit side uses the Start of Frame bit (TXSOF) and End of Frame bit (TXEOF) to delimit packet boundaries on the AXI-Stream. TXSOF and TXEOF may span descriptors. The receive side works similarly: when a packet is longer than one descriptor's buffer, the engine automatically fetches the next descriptor to continue receiving, and RXSOF and RXEOF delimit each packet.
    First set DMACR.RS to 1, then write TAILDESC_PTR to start transmitting or receiving. TAILDESC_PTR can be updated on the fly to keep transfers going continuously, giving a FIFO-like circular mode of operation.

    SG descriptors

    Descriptor addresses must be aligned to 16 32-bit words, i.e., 64 bytes.

    中断

    有三种类型的中断,

    • IOC_Irq:SG模式下,每一个描述符完成都会触发,中断太频繁导致包处理性能太低,coalesce寄存器可用来调节中断频率。
    • Dly_Irq:配合coalesce使用。
    • Err_Irq:错误中断。
  • xilinx dma debugging notes

    2019-01-17 17:06:47
    Following the official example, start a receive transfer: u32 Status = XAxiDma_SimpleTransfer(&AxiDma, (UINTPTR)RxDMAPtr, (u32)(1024), XAXIDMA_DEVICE_TO_DMA); if (Status != XST_SUCCESS) { prin...

    Following the official example, start a receive transfer:

            u32 Status = XAxiDma_SimpleTransfer(&AxiDma, (UINTPTR)RxDMAPtr,
                     (u32)(1024), XAXIDMA_DEVICE_TO_DMA);
            if (Status != XST_SUCCESS) {
                printf("dma from device error:%d\n", Status);
                pthread_exit(0);
            }

    Every received interrupt ended up in the IRQ_ERROR path, so print the value of IrqStatus:

            IrqStatus = XAxiDma_IntrGetIrq(&AxiDma, XAXIDMA_DEVICE_TO_DMA);
            XAxiDma_IntrAckIrq(&AxiDma, IrqStatus, XAXIDMA_DEVICE_TO_DMA);
            if (!(IrqStatus & XAXIDMA_IRQ_ALL_MASK)) {
                printf("all mask = %x\n", IrqStatus);
                continue;
            }
            if ((IrqStatus & XAXIDMA_IRQ_ERROR_MASK)) {
                Error = 1;
                XAxiDma_Reset(&AxiDma);
                TimeOut = RESET_TIMEOUT_COUNTER;
                while (TimeOut) {
                    if(XAxiDma_ResetIsDone(&AxiDma)) {
                        break;
                    }
                    TimeOut -= 1;
                }
                printf("Error:0x%x dma from device %d \n", IrqStatus,TimeOut);
                continue;
            }
            if ((IrqStatus & XAXIDMA_IRQ_IOC_MASK)) {
                RxDone++;
                Packet_Rx_Length = XAxiDma_ReadReg(AxiDma.RxBdRing[0].ChanBase, XAXIDMA_BUFFLEN_OFFSET);
                printf("actual rx length = %ld\n", Packet_Rx_Length);
                sem_post(&c2h_sem);//trig to read
            }

    The output: Error:0x5000 dma from device 10000

    XAXIDMA_IRQ_ERROR_MASK is defined as 0x004000

    XAXIDMA_IRQ_IOC_MASK is defined as 0x001000

    So the transfer completed and an error was raised at the same time. The official example handles errors by simply resetting the DMA, without classifying them.

     

    Consulting the pg021_axi_dma manual, we find:

    S2MM_DMASR (S2MM DMA Status Register – Offset 34h)

    This register provides the status for the Stream to Memory Map DMA Channel.

    The error classification lives in the low 11 bits, yet the low 11 bits of the IrqStatus that was read back were all 0, so presumably the read function masks them off. Sure enough, the driver ANDs the raw value with 0x007000 after reading; so re-read IrqStatus directly from the register:

            IrqStatus = XAxiDma_ReadReg(AxiDma.RegBase + (XAXIDMA_RX_OFFSET * XAXIDMA_DEVICE_TO_DMA), XAXIDMA_SR_OFFSET);

     

    The printed IrqStatus value was 0x5011, meaning DMAIntErr and Halted are both 1.

    The Halted bit is described as follows:

    DMA Channel Halted. Indicates the run/stop state of the DMA channel.

    • 0 = DMA channel running.

    • 1 = DMA channel halted. For Scatter/Gather Mode this bit gets set when DMACR.RS = 0 and DMA and SG operations have halted. For Direct Register Mode this bit gets set when DMACR.RS = 0 and DMA operations have halted. There can be a lag of time between when DMACR.RS = 0 and when DMASR.Halted = 1.

    Note: When halted (RS= 0 and Halted = 1), writing to TAILDESC_PTR pointer registers has no effect on DMA operations when in Scatter Gather Mode. For Direct Register Mode, writing to the LENGTH register has no effect on DMA operations.

    The DMAIntErr bit is described as follows:

    DMA Internal Error. This error occurs if the buffer length specified in the fetched descriptor is set to 0. Also, when in Scatter Gather Mode and using the status app length field, this error occurs when the Status AXI4-Stream packet RxLength field does not match the S2MM packet being received by the S_AXIS_S2MM interface. When Scatter Gather is disabled, this error is flagged if any error occurs during Memory write or if the incoming packet is bigger than what is specified in the DMA length register.

    This error condition causes the AXI DMA to halt gracefully. The DMACR.RS bit is set to 0, and when the engine has completely shut down, the DMASR.Halted bit is set to 1.

    • 0 = No DMA Internal Errors.

    • 1 = DMA Internal Error detected.

    That leads to the DMA length register:

    S2MM_LENGTH (S2MM DMA Buffer Length Register – Offset 58h)

    This register provides the length in bytes of the buffer to write data from the Stream to Memory map DMA transfer.

    The S2MM Length field is described as:

    Indicates the length in bytes of the S2MM buffer available to write receive data from the S2MM channel. Writing a non-zero value to this register enables S2MM channel to receive packet data.

    At the completion of the S2MM transfer, the number of actual bytes written on the S2MM AXI4 interface is updated to the S2MM_LENGTH register.

    Note: This value must be greater than or equal to the largest expected packet to be received on S2MM AXI4-Stream. Values smaller than the received packet result in undefined behavior.

    Notes:

    1. Width of Length field determined by Buffer Length Register Width parameter. Minimum width is 8 bits (7 to 0) and maximum width is 26 bits (25 to 0).

    So the problem was that the configured S2MM buffer length was too small. After a transfer completes, reading S2MM_LENGTH gives the actual size of the received packet.

    The width of the S2MM Length field can be configured when the DMA IP core is instantiated.

     

  • Xilinx Vivado AXI DMA overview

    2020-12-07 15:05:30
    Xilinx Vivado AXI DMA 1 Overview The AXI Direct Memory Access (AXI DMA) core is a soft Xilinx IP core for use with the Xilinx Vivado® Design Suite. AXI DMA provides high-bandwidth direct memory access between memory and AXI4-Stream target peripherals. Its optional scatter/gather capability...

    Xilinx Vivado AXI DMA

    1 Overview

    The AXI Direct Memory Access (AXI DMA) core is a soft Xilinx IP core for use with the Xilinx Vivado® Design Suite. AXI DMA provides high-bandwidth direct memory access between memory and AXI4-Stream target peripherals. Its optional scatter/gather capability also offloads data-movement tasks from the central processing unit (CPU).

    2 Features

    • AXI4 compliant
    • Optional scatter/gather direct memory access (DMA) support
    • AXI4 data widths of 32, 64, 128, 256, 512, and 1024 bits
    • AXI4-Stream data widths of 8, 16, 32, 64, 128, 256, 512, and 1024 bits
    • Optional keyhole support
    • Optional data realignment support for stream data widths up to 512 bits
    • Optional AXI control and status streams
    • Optional Micro DMA support
    • Support for up to 64-bit addressing

    Data is moved by an AXI4 read master on the memory-map-to-stream (MM2S) channel and by an AXI4 write master on the stream-to-memory-map (S2MM) channel. In Scatter/Gather mode, AXI DMA also supports multi-channel data movement with up to 16 channels on both the MM2S and S2MM paths. The MM2S and S2MM channels operate independently. AXI DMA provides 4 KB address-boundary protection (when not configured as Micro DMA), automatic burst mapping, and the ability to queue descriptors, allowing the full bandwidth of the AXI4-Stream buses to be used. In addition, AXI DMA performs byte-level data realignment, so memory reads and writes can start at any byte offset. The MM2S channel supports an AXI control stream for sending user application data to the target IP; for the S2MM channel, an AXI status stream receives user application data from the target IP. The optional scatter/gather engine fetches and updates buffer descriptors in system memory through an AXI4 scatter/gather read/write master interface.

  • dma analysis on the xilinx platform

    2019-07-11 14:59:10
    The Linux kernel manages dma through the dmaengine framework; the following link is a good introduction for anyone unfamiliar with it: https://blog.csdn.net/were0415/article/details/54095899 Taking video output as an example: 1. The FPGA allocates a frame buffer, as the device tree below shows:...

    The Linux kernel manages dma through the dmaengine framework; the following link is a good introduction for anyone unfamiliar with it:

    https://blog.csdn.net/were0415/article/details/54095899

    Taking video output as an example:

    1. The FPGA allocates a frame buffer, as shown in the device tree below:

    		VideoOut_1ch_v_frmbuf_rd_0: v_frmbuf_rd@b0000000 {
    			#dma-cells = <1>;
    			clock-names = "ap_clk";
    			clocks = <&clk 72>;
    			compatible = "xlnx,v-frmbuf-rd-2.1", "xlnx,axi-frmbuf-rd-v2.1";
    			interrupt-names = "interrupt";
    			interrupt-parent = <&gic>;
    			interrupts = <0 105 4>;
    			reg = <0x0 0xb0000000 0x0 0x10000>;
    			reset-gpios = <&gpio 81 1>;
    			xlnx,dma-addr-width = <64>;
    			xlnx,dma-align = <8>;
    			xlnx,max-height = <2160>;
    			xlnx,max-width = <3840>;
    			xlnx,pixels-per-clock = <1>;
    			xlnx,s-axi-ctrl-addr-width = <0x7>;
    			xlnx,s-axi-ctrl-data-width = <0x20>;
    			xlnx,vid-formats = "yuyv", "nv12", "nv16";
    			xlnx,video-width = <8>;
    		};

    The driver registers a DMA engine by calling of_dma_controller_register:

    static int xilinx_frmbuf_probe(struct platform_device *pdev)
    {
    	struct device_node *node = pdev->dev.of_node;
    	struct xilinx_frmbuf_device *xdev;
    	struct resource *io;
    	enum dma_transfer_direction dma_dir;
    	const struct of_device_id *match;
    	int err;
    	u32 i, j, align, ppc;
    	int hw_vid_fmt_cnt;
    	const char *vid_fmts[ARRAY_SIZE(xilinx_frmbuf_formats)];
    
    	xdev = devm_kzalloc(&pdev->dev, sizeof(*xdev), GFP_KERNEL);
    	if (!xdev)
    		return -ENOMEM;
    
    	xdev->dev = &pdev->dev;
    
    	match = of_match_node(xilinx_frmbuf_of_ids, node);
    	if (!match)
    		return -ENODEV;
    
    	xdev->cfg = match->data;
    
    	dma_dir = (enum dma_transfer_direction)xdev->cfg->direction;
    
    	xdev->rst_gpio = devm_gpiod_get(&pdev->dev, "reset",
    					GPIOD_OUT_HIGH);
    	if (IS_ERR(xdev->rst_gpio)) {
    		err = PTR_ERR(xdev->rst_gpio);
    		if (err == -EPROBE_DEFER)
    			dev_info(&pdev->dev,
    				 "Probe deferred due to GPIO reset defer\n");
    		else
    			dev_err(&pdev->dev,
    				"Unable to locate reset property in dt\n");
    		return err;
    	}
    
    	gpiod_set_value_cansleep(xdev->rst_gpio, 0x0);
    
    	io = platform_get_resource(pdev, IORESOURCE_MEM, 0);
    	xdev->regs = devm_ioremap_resource(&pdev->dev, io);
    	if (IS_ERR(xdev->regs))
    		return PTR_ERR(xdev->regs);
    
    	err = of_property_read_u32(node, "xlnx,max-height", &xdev->max_height);
    	if (err < 0) {
    		xdev->max_height = XILINX_FRMBUF_MAX_HEIGHT;
    	} else if (xdev->max_height > XILINX_FRMBUF_MAX_HEIGHT ||
    		   xdev->max_height < XILINX_FRMBUF_MIN_HEIGHT) {
    		dev_err(&pdev->dev, "Invalid height in dt");
    		return -EINVAL;
    	}
    
    	err = of_property_read_u32(node, "xlnx,max-width", &xdev->max_width);
    	if (err < 0) {
    		xdev->max_width = XILINX_FRMBUF_MAX_WIDTH;
    	} else if (xdev->max_width > XILINX_FRMBUF_MAX_WIDTH ||
    		   xdev->max_width < XILINX_FRMBUF_MIN_WIDTH) {
    		dev_err(&pdev->dev, "Invalid width in dt");
    		return -EINVAL;
    	}
    
    	/* Initialize the DMA engine */
    	if (xdev->cfg->flags & XILINX_PPC_PROP) {
    		err = of_property_read_u32(node, "xlnx,pixels-per-clock", &ppc);
    		if (err || (ppc != 1 && ppc != 2 && ppc != 4)) {
    			dev_err(&pdev->dev, "missing or invalid pixels per clock dts prop\n");
    			return err;
    		}
    
    		err = of_property_read_u32(node, "xlnx,dma-align", &align);
    		if (err)
    			align = ppc * XILINX_FRMBUF_ALIGN_MUL;
    
    		if (align < (ppc * XILINX_FRMBUF_ALIGN_MUL) ||
    		    ffs(align) != fls(align)) {
    			dev_err(&pdev->dev, "invalid dma align dts prop\n");
    			return -EINVAL;
    		}
    	} else {
    		align = 16;
    	}
    
    	xdev->common.copy_align = fls(align) - 1;
    	xdev->common.dev = &pdev->dev;
    
    	INIT_LIST_HEAD(&xdev->common.channels);
    	dma_cap_set(DMA_SLAVE, xdev->common.cap_mask);
    	dma_cap_set(DMA_PRIVATE, xdev->common.cap_mask);
    
    	/* Initialize the channels */
    	err = xilinx_frmbuf_chan_probe(xdev, node);
    	if (err < 0)
    		return err;
    
    	xdev->chan.direction = dma_dir;
    
    	if (xdev->chan.direction == DMA_DEV_TO_MEM) {
    		xdev->common.directions = BIT(DMA_DEV_TO_MEM);
    		dev_info(&pdev->dev, "Xilinx AXI frmbuf DMA_DEV_TO_MEM\n");
    	} else if (xdev->chan.direction == DMA_MEM_TO_DEV) {
    		xdev->common.directions = BIT(DMA_MEM_TO_DEV);
    		dev_info(&pdev->dev, "Xilinx AXI frmbuf DMA_MEM_TO_DEV\n");
    	} else {
    		xilinx_frmbuf_chan_remove(&xdev->chan);
    		return -EINVAL;
    	}
    
    	/* read supported video formats and update internal table */
    	hw_vid_fmt_cnt = of_property_count_strings(node, "xlnx,vid-formats");
    
    	err = of_property_read_string_array(node, "xlnx,vid-formats",
    					    vid_fmts, hw_vid_fmt_cnt);
    	if (err < 0) {
    		dev_err(&pdev->dev,
    			"Missing or invalid xlnx,vid-formats dts prop\n");
    		return err;
    	}
    
    	for (i = 0; i < hw_vid_fmt_cnt; i++) {
    		const char *vid_fmt_name = vid_fmts[i];
    
    		for (j = 0; j < ARRAY_SIZE(xilinx_frmbuf_formats); j++) {
    			const char *dts_name =
    				xilinx_frmbuf_formats[j].dts_name;
    
    			if (strcmp(vid_fmt_name, dts_name))
    				continue;
    
    			xdev->enabled_vid_fmts |=
    				xilinx_frmbuf_formats[j].fmt_bitmask;
    		}
    	}
    
    	/* Determine supported vid framework formats */
    	frmbuf_init_format_array(xdev);
    
    	xdev->common.device_alloc_chan_resources =
    				xilinx_frmbuf_alloc_chan_resources;
    	xdev->common.device_free_chan_resources =
    				xilinx_frmbuf_free_chan_resources;
    	xdev->common.device_prep_interleaved_dma =
    				xilinx_frmbuf_dma_prep_interleaved;
    	xdev->common.device_terminate_all = xilinx_frmbuf_terminate_all;
    	xdev->common.device_synchronize = xilinx_frmbuf_synchronize;
    	xdev->common.device_tx_status = xilinx_frmbuf_tx_status;
    	xdev->common.device_issue_pending = xilinx_frmbuf_issue_pending;
    
    	platform_set_drvdata(pdev, xdev);
    
    	/* Register the DMA engine with the core */
    	dma_async_device_register(&xdev->common);
    	err = of_dma_controller_register(node, of_dma_xilinx_xlate, xdev);
    
    
    	return 0;
    }
    static struct platform_driver xilinx_frmbuf_driver = {
    	.driver = {
    		.name = "xilinx-frmbuf",
    		.of_match_table = xilinx_frmbuf_of_ids,
    	},
    	.probe = xilinx_frmbuf_probe,
    	.remove = xilinx_frmbuf_remove,
    };
    
    module_platform_driver(xilinx_frmbuf_driver);

    2. DMA Engine API programming

    Slave DMA usage consists of the following steps:
    1. Allocate a DMA slave channel;
    2. Set slave- and controller-specific parameters;
    3. Get a transfer descriptor;
    4. Submit the transfer descriptor;
    5. Issue pending requests and wait for the callback notification.

    So the first thing the driver does is allocate a DMA slave channel:

    		v_drm_dmaengine_drv111: drm-dmaengine-drv111 { 
    			compatible = "xlnx,pl-disp"; 
    			dmas = <&VideoOut_1ch_v_frmbuf_rd_0 0>; 
    			dma-names = "dma0"; 
    			xlnx,vformat = "YUYV"; /* uppercase */
    			xlnx,bridge = <&VideoOut_1ch_v_tc_0>;
    			#address-cells = <1>;
    			#size-cells = <0>;		
    			dmaengine_lcd_port: port@0 { 
    				reg = <0>; 
    				lcd_dmaengine_crtc: endpoint { 
    					remote-endpoint = <&lcd_encoder>; 
    				}; 
    			}; 
    		};
    
    static int xlnx_pl_disp_probe(struct platform_device *pdev)
    {
    	struct device *dev = &pdev->dev;
    	struct device_node *vtc_node;
    	struct xlnx_pl_disp *xlnx_pl_disp;
    	int ret;
    	const char *vformat;
    	struct dma_chan *dma_chan;
    	struct xlnx_dma_chan *xlnx_dma_chan;
    
    	xlnx_pl_disp = devm_kzalloc(dev, sizeof(*xlnx_pl_disp), GFP_KERNEL);
    	if (!xlnx_pl_disp)
    		return -ENOMEM;
    
    	// request a DMA slave channel
    	dma_chan = of_dma_request_slave_channel(dev->of_node, "dma0");
    	if (IS_ERR_OR_NULL(dma_chan)) {
    		dev_err(dev, "failed to request dma channel\n");
    		return PTR_ERR(dma_chan);
    	}
    
    	xlnx_dma_chan = devm_kzalloc(dev, sizeof(*xlnx_dma_chan), GFP_KERNEL);
    	if (!xlnx_dma_chan)
    		return -ENOMEM;
    
    	xlnx_dma_chan->dma_chan = dma_chan;
    	xlnx_pl_disp->chan = xlnx_dma_chan;
    	ret = of_property_read_string(dev->of_node, "xlnx,vformat", &vformat);
    	if (ret) {
    		dev_err(dev, "No xlnx,vformat value in dts\n");
    		goto err_dma;
    	}
    	
    	strcpy((char *)&xlnx_pl_disp->fmt, vformat);
    	printk("+++++++++++vformat: %s,  xlnx_pl_disp->fmt: 0x%x\n", vformat,xlnx_pl_disp->fmt);
    
    	/* VTC Bridge support */
    	vtc_node = of_parse_phandle(dev->of_node, "xlnx,bridge", 0);
    	printk("++++++++++++++++++++++++++vtc_node:%p\n", vtc_node);
    	if (vtc_node) {
    		xlnx_pl_disp->vtc_bridge = of_xlnx_bridge_get(vtc_node);
    		if (!xlnx_pl_disp->vtc_bridge) {
    			dev_info(dev, "Didn't get vtc bridge instance\n");
    			return -EPROBE_DEFER;
    		}
    	} else {
    		dev_info(dev, "vtc bridge property not present\n");
    	}
    
    	xlnx_pl_disp->dev = dev;
    	platform_set_drvdata(pdev, xlnx_pl_disp);
    
    	ret = component_add(dev, &xlnx_pl_disp_component_ops);
    	if (ret)
    		goto err_dma;
    
    	xlnx_pl_disp->master = xlnx_drm_pipeline_init(pdev);
    	if (IS_ERR(xlnx_pl_disp->master)) {
    		ret = PTR_ERR(xlnx_pl_disp->master);
    		dev_err(dev, "failed to initialize the drm pipeline\n");
    		goto err_component;
    	}
    
    	dev_info(&pdev->dev, "Xlnx PL display driver probed\n");
    
    	return 0;
    
    err_component:
    	component_del(dev, &xlnx_pl_disp_component_ops);
    err_dma:
    	dma_release_channel(xlnx_pl_disp->chan->dma_chan);
    
    	return ret;
    }
    

    2. Set slave- and controller-specific parameters. This is reached from the upper layer during mode set, which configures the parameters here:

    // Determine the buffer allocation size, e.g. 1280x720 @ YUYV needs 1280*2*720 bytes
    static int xlnx_pl_disp_plane_mode_set(struct drm_plane *plane,
    				       struct drm_framebuffer *fb,
    				       int crtc_x, int crtc_y,
    				       unsigned int crtc_w, unsigned int crtc_h,
    				       u32 src_x, uint32_t src_y,
    				       u32 src_w, uint32_t src_h)
    {
    	printk("________________________________%s\n", __func__);
    	struct xlnx_pl_disp *xlnx_pl_disp = plane_to_dma(plane);
    	const struct drm_format_info *info = fb->format;
    	dma_addr_t luma_paddr, chroma_paddr;
    	size_t stride;
    	struct xlnx_dma_chan *xlnx_dma_chan = xlnx_pl_disp->chan;
    
    	if (info->num_planes > 2) {
    		dev_err(xlnx_pl_disp->dev, "Color format not supported\n");
    		return -EINVAL;
    	}
    	luma_paddr = drm_fb_cma_get_gem_addr(fb, plane->state, 0);
    	if (!luma_paddr) {
    		dev_err(xlnx_pl_disp->dev, "failed to get luma paddr\n");
    		return -EINVAL;
    	}
    	printk("____________________________luma_paddr = 0x%x\n", luma_paddr);
    
    	dev_dbg(xlnx_pl_disp->dev, "num planes = %d\n", info->num_planes);
    	xlnx_dma_chan->xt.numf = src_h;
    	xlnx_dma_chan->sgl[0].size = drm_format_plane_width_bytes(info,
    								  0, src_w);
    	xlnx_dma_chan->sgl[0].icg = fb->pitches[0] - xlnx_dma_chan->sgl[0].size;
    	xlnx_dma_chan->xt.src_start = luma_paddr;
    	xlnx_dma_chan->xt.frame_size = info->num_planes;
    	xlnx_dma_chan->xt.dir = DMA_MEM_TO_DEV;
    	xlnx_dma_chan->xt.src_sgl = true;
    	xlnx_dma_chan->xt.dst_sgl = false;
    
    	/* Do we have a video format aware dma channel?
    	 * so, modify descriptor accordingly. Hueristic test:
    	 * we have a multi-plane format but only one dma channel
    	 */
    	if (info->num_planes > 1) {
    		chroma_paddr = drm_fb_cma_get_gem_addr(fb, plane->state, 1);
    		if (!chroma_paddr) {
    			dev_err(xlnx_pl_disp->dev,
    				"failed to get chroma paddr\n");
    			return -EINVAL;
    		}
    		stride = xlnx_dma_chan->sgl[0].size +
    			xlnx_dma_chan->sgl[0].icg;
    		xlnx_dma_chan->sgl[0].src_icg = chroma_paddr -
    			xlnx_dma_chan->xt.src_start -
    			(xlnx_dma_chan->xt.numf * stride);
    	}
    
    	return 0;
    }
    
    static void xlnx_pl_disp_plane_atomic_update(struct drm_plane *plane,
    					     struct drm_plane_state *old_state)
    {
    	int ret;
    	struct xlnx_pl_disp *xlnx_pl_disp = plane_to_dma(plane);
    	printk("________________________________%s\n", __func__);
    
    	ret = xlnx_pl_disp_plane_mode_set(plane,
    					  plane->state->fb,
    					  plane->state->crtc_x,
    					  plane->state->crtc_y,
    					  plane->state->crtc_w,
    					  plane->state->crtc_h,
    					  plane->state->src_x >> 16,
    					  plane->state->src_y >> 16,
    					  plane->state->src_w >> 16,
    					  plane->state->src_h >> 16);
    	if (ret) {
    		dev_err(xlnx_pl_disp->dev, "failed to mode set a plane\n");
    		return;
    	}
    	/* in case frame buffer is used set the color format */
    	xilinx_xdma_drm_config(xlnx_pl_disp->chan->dma_chan,
    			       xlnx_pl_disp->plane.state->fb->format->format);
    	/* apply the new fb addr and enable */
    	xlnx_pl_disp_plane_enable(plane);
    }
    
    static const struct drm_plane_helper_funcs xlnx_pl_disp_plane_helper_funcs = {
    	.atomic_update = xlnx_pl_disp_plane_atomic_update,
    	.atomic_disable = xlnx_pl_disp_plane_atomic_disable,
    };

    3. Get a transfer descriptor;
    4. Submit the transfer descriptor;
     

    
    /**
     * xlnx_pl_disp_plane_enable - Enables DRM plane
     * @plane: DRM plane object
     *
     * Enable the DRM plane, by enabling the corresponding DMA
     */
    static void xlnx_pl_disp_plane_enable(struct drm_plane *plane)
    {
    	struct xlnx_pl_disp *xlnx_pl_disp = plane_to_dma(plane);
    	struct dma_async_tx_descriptor *desc;
    	enum dma_ctrl_flags flags;
    	struct xlnx_dma_chan *xlnx_dma_chan = xlnx_pl_disp->chan;
    	struct dma_chan *dma_chan = xlnx_dma_chan->dma_chan;
    	struct dma_interleaved_template *xt = &xlnx_dma_chan->xt;
    	printk("________________________________%s\n", __func__);
    
    	flags = DMA_CTRL_ACK | DMA_PREP_INTERRUPT;
    	// get a DMA descriptor
    	desc = dmaengine_prep_interleaved_dma(dma_chan, xt, flags);
    	if (!desc) {
    		dev_err(xlnx_pl_disp->dev,
    			"failed to prepare DMA descriptor\n");
    		return;
    	}
    	desc->callback = xlnx_pl_disp->callback;
    	desc->callback_param = xlnx_pl_disp->callback_param;
    	xilinx_xdma_set_earlycb(xlnx_dma_chan->dma_chan, desc, true);
    
    	if (plane->state->fb->flags == DRM_MODE_FB_ALTERNATE_TOP ||
    	    plane->state->fb->flags == DRM_MODE_FB_ALTERNATE_BOTTOM) {
    		if (plane->state->fb->flags == DRM_MODE_FB_ALTERNATE_TOP)
    			xlnx_pl_disp->fid = 1;
    		else
    			xlnx_pl_disp->fid = 0;
    
    		xilinx_xdma_set_fid(xlnx_dma_chan->dma_chan, desc,
    				    xlnx_pl_disp->fid);
    	}
    
    	dmaengine_submit(desc);
    	dma_async_issue_pending(xlnx_dma_chan->dma_chan);
    }
    

    5. Issue pending requests and wait for the callback notification.

    dma_async_issue_pending(xlnx_dma_chan->dma_chan);

    This ends up in the function registered earlier by the frame buffer driver:

    xdev->common.device_issue_pending = xilinx_frmbuf_issue_pending;

    static void xilinx_frmbuf_issue_pending(struct dma_chan *dchan)
    {
    	struct xilinx_frmbuf_chan *chan = to_xilinx_chan(dchan);
    	unsigned long flags;
    
    	spin_lock_irqsave(&chan->lock, flags);
    	xilinx_frmbuf_start_transfer(chan);
    	spin_unlock_irqrestore(&chan->lock, flags);
    }
    
    /**
     * xilinx_frmbuf_start_transfer - Starts frmbuf transfer
     * @chan: Driver specific channel struct pointer
     */
    static void xilinx_frmbuf_start_transfer(struct xilinx_frmbuf_chan *chan)
    {
    	struct xilinx_frmbuf_tx_descriptor *desc;
    
    	if (!chan->idle)
    		return;
    
    	if (chan->staged_desc) {
    		chan->active_desc = chan->staged_desc;
    		chan->staged_desc = NULL;
    	}
    
    	if (list_empty(&chan->pending_list))
    		return;
    
    	desc = list_first_entry(&chan->pending_list,
    				struct xilinx_frmbuf_tx_descriptor,
    				node);
    	printk("xilinx_frmbuf_start_transfer:desc->hw.luma_plane_addr = 0x%x\n", desc->hw.luma_plane_addr);
    
    	/* Start the transfer */
    	chan->write_addr(chan, XILINX_FRMBUF_ADDR_OFFSET,
    			 desc->hw.luma_plane_addr);
    	chan->write_addr(chan, XILINX_FRMBUF_ADDR2_OFFSET,
    			 desc->hw.chroma_plane_addr);
    
    	/* HW expects these parameters to be same for one transaction */
    	frmbuf_write(chan, XILINX_FRMBUF_WIDTH_OFFSET, desc->hw.hsize);
    	frmbuf_write(chan, XILINX_FRMBUF_STRIDE_OFFSET, desc->hw.stride);
    	frmbuf_write(chan, XILINX_FRMBUF_HEIGHT_OFFSET, desc->hw.vsize);
    	frmbuf_write(chan, XILINX_FRMBUF_FMT_OFFSET, chan->vid_fmt->id);
    
    	/* If it is framebuffer read IP set the FID */
    	if (chan->direction == DMA_MEM_TO_DEV && chan->hw_fid)
    		frmbuf_write(chan, XILINX_FRMBUF_FID_OFFSET, desc->fid);
    
    	/* Start the hardware */
    	xilinx_frmbuf_start(chan);
    	list_del(&desc->node);
    
    	/* No staging descriptor required when auto restart is disabled */
    	if (chan->mode == AUTO_RESTART)
    		chan->staged_desc = desc;
    	else
    		chan->active_desc = desc;
    }
    

    With that, the DMA channel is started.

    Now look at how the application layer drives it:

    
    static int drm_buffer_create(struct drm_device *drm_dev, unsigned int index)
    {
    	int i,ret;
    	struct drm_mode_create_dumb creq;
        struct drm_prime_handle prime;
    
    	struct drm_buffer *buf = &drm_dev->d_buff[index];
    	buf->index = index;
    
    	memset(&creq, 0, sizeof(creq));
    	creq.width = drm_dev->width;
    	creq.height = drm_dev->height;
    	creq.bpp = BYTES_PER_PIXEL * 8;
    	creq.flags = 0;
    
    	ret = drmIoctl(drm_dev->fd, DRM_IOCTL_MODE_CREATE_DUMB, &creq);
    	if (ret){
    		printf("create dumb failed!\n");
    		return -1;
    	}
    
    	uint32_t offsets[4]    = { 0, 0, 0, 0 };
    	uint32_t pitches[4]    = { 0, 0, 0, 0 };
    //	uint32_t bo_handles[4] = { 0, 0, 0, 0 };
    	uint32_t stride = creq.pitch;
    
    	printf("stride = %d\n", stride);
    
    	memset(&prime, 0, sizeof prime);
    	prime.handle = creq.handle;
    
    	/* Export GEM object to a FD */
    	ret = ioctl(drm_dev->fd, DRM_IOCTL_PRIME_HANDLE_TO_FD, &prime);
    	if (ret) {
    		printf("PRIME_HANDLE_TO_FD failed.\n");
    	   return -1;
    	}
    	//get buf info
    	i = 0;
    	buf->num_planes = 1;
    	buf->dmabuf_fd[i] = prime.fd;
    	buf->offsets[i]= 0;
    	buf->lengths[i]= stride * drm_dev->height;
    	buf->dumb_buff_length[i] = creq.size;
    
    	pitches[0] = stride;
    	offsets[0] = 0;
    	buf->bo_handle[0] = creq.handle;
    
    	// Use the buffer's handle to create an FB; the FB id is returned in fb_handle.
    	ret = drmModeAddFB2(drm_dev->fd, drm_dev->width, drm_dev->height,  drm_dev->format, &buf->bo_handle[0], \
    								pitches, offsets, &buf->fb_handle, 0);
    	if (ret){
    		printf("failed to create fb\n");
    		return -1;
    	}
    	struct drm_mode_map_dumb mreq; // request mapping of the buffer into memory

    	memset(&mreq, 0, sizeof(mreq)); // zero-initialize before the ioctl
    	mreq.handle = creq.handle;
    	ret = drmIoctl(drm_dev->fd, DRM_IOCTL_MODE_MAP_DUMB, &mreq);
    	if (ret){
    		printf("map dumb failed!\n");
    	}
    	
    	// Guess: the created buffer lives in video memory, so drm_mode_map_dumb maps it first.
    	// The mapping still belongs to kernel space; one more mmap is needed before the program can use it.
    	buf->drm_buff[i] = mmap(0, creq.size, PROT_READ | PROT_WRITE, MAP_SHARED, drm_dev->fd, mreq.offset);
    	if (buf->drm_buff[i] == MAP_FAILED){
    		printf("mmap failed!\n");
    	}
    	printf("=====================================================================\n");
    
    	// Everything is ready; all that's left is to wire it together.
    	ret = drmModeSetCrtc(drm_dev->fd, drm_dev->crtc_id, buf->fb_handle, 0, 0, &drm_dev->connector->connector_id, 1, drm_dev->connector->modes);
    	printf("ret = %d, drm_dev->connector->modes->clock = %d\n", ret,drm_dev->connector->modes->clock);
    //	ret = drmModeSetPlane(drm_dev->fd, drm_dev->plane_id, drm_dev->crtc_id, buf->fb_handle, 0, 0, 0,
    //			drm_dev->width, drm_dev->height,0, 0, drm_dev->width << 16, drm_dev->height << 16);
    
    	return 0;
    }
    

    drmModeSetCrtc ends up calling xlnx_pl_disp_plane_atomic_update in the driver (see the source above), which maps one-to-one onto the DMA steps described earlier. The fb_handle passed in is the data source, the display is the destination, and the DMA controller moves the data between them.

    If multiple buffers are allocated, the kernel prints debug output such as:

    ____________________________luma_paddr = 0x70500000

    ____________________________luma_paddr = 0x70300000

    ____________________________luma_paddr = 0x70100000

  • https://xilinx.github.io/embeddedsw.github.io/axidma/doc/html/api/index.html BD组成 Within the ring, the driver maintains four groups of BDs. Each group consists of 0 or more adjacent BDs: Free: The ...
  • xilinx DMA interrupt not responding

    2020-09-20 21:11:00
    I normally use ADI's open-source DMA IP core. On a recent project that moves data from PS to PL with the ADI IP, the PS-side data sometimes failed to update and I have not yet found the cause, so I tried the Xilinx IP directly instead, but at first I could not get it to move any data, and in the end...
  • In Xilinx designs, especially 7-series SoCs such as ZYNQ, VDMA, CDMA, ADMA and the like are used when the FPGA talks to DDR; this is the driver code for them.
  • DMA design based on the Xilinx PCIe Core
  • Setting up a Xilinx PCIE DMA simulation environment

    2019-06-03 22:55:44
    4. xapp1052 DMA simulation 4.1 testcase 4.2 configuring cfg_bus_mstr_enable 4.3 WR DMA simulation 4.4 RD DMA simulation 1. Preface Before reading this article, readers new to PCIE are advised to read the following items in order: 5. X...
  • Xilinx's official AXI DMA technical documentation; essential for ZYNQ DMA development.
  • Xilinx FPGA PCIe IP core DMA transfer reference code.
  • xilinx_axidma-master.zip

    2019-05-16 12:57:13
    DMA driver sources for PS/PL interaction on a zynq board; DMA driver sources for porting into a petalinux project.
  • 1. Chinese translation of the Xilinx PG021_AXI_DMA document. 2. AXI_DMA V7.1 LogiCORE IP Product Guide. 3. Three documents are provided: the official English PG021; a Word version of the PG021 AXI DMA Chinese translation; and a PDF version of the same translation.
  • This is Xilinx's PCIe-bus DMA reference design; worth a look for anyone working on a PCIe or DMA project.
  • xilinx_vivado_sdk2018.2 learning examples: 1. DMA initialization 1) variable definitions //define the ioctl commands #define AXI_ADC_IOCTL_BASE 'W' #define AXI_ADC_SET_SAMPLE_NUM _IO(AXI_ADC_IOCTL_BASE, 0) #define AXI_ADC_SET_...
  • 1024-point FFT, 16-bit data input/output, with DMA support; Xilinx VHDL code and documentation.
  • Notes on pcie dma (xilinx platform)

    2019-11-28 10:59:44
    PCIE DMA and PIO. DMA (Direct Memory Access) transfers data without the CPU handling it; a dedicated DMA controller does the work instead, so very little CPU is consumed. DMA read flow: 1. The driver...
  • <div><p>This is a rebase onto latest xilinx_dma driver of patches I have on my local tree. It includes: - Selected patches from #76 pull-request - Various fixes by me <p>I tested this with the test ...
  • #include #include #include #include #include #include #include #include #include #include #include #include ...#define DEVICE_NAME "dma_test" ...unsigned char dmatest
  • A BD (buffer descriptor) project for axidma; verified to run the AXI DMA SG-mode loopback test on a MYiR development board.
  • DMA design based on the Xilinx PCIe core; this is the design document, and the source code can be requested by email: chauncey_wu@163.com
  • 1024-point FFT, 16-bit data input/output, with DMA support, Xilinx VHDL code.zip
  • Xilinx PCIe DMA Linux driver code analysis

    千次阅读 2018-11-02 14:00:24
  • Xilinx IP AXI DMA V7.1 - PG021 document translation: 1. AXI DMA v7.1 translation 2. Download link for the full translated document. Link: ...
