  • 2020-07-01 15:09:04

    Generate a large file for testing

       /**
         * Create a large file
         * @throws IOException
         */
        public static void createBigFile() throws IOException {
            File file = new File("F:\\data\\big_file");
            FileWriter fileWriter = new FileWriter(file);
            BufferedWriter bufferedWriter = new BufferedWriter(fileWriter);
            String str = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa123,xxxxxxxxxxxxxxxxxx123";
            for (int i = 0; i < 10000000; i++) {
                bufferedWriter.write(str);
                bufferedWriter.newLine();
            }
            bufferedWriter.flush();
            bufferedWriter.close();
        }
    

    Import a CSV file in batches

    Persist in batches to avoid running out of memory.
    Note: `List list = new ArrayList<>(1024);` can be replaced with your own business entity, setting its fields.

       /**
         * Import a file
         * @param filePath path of the file to read
         * @param size number of records to read before each persist
         */
        public void importFile(String filePath, Integer size) throws IOException, ParseException {
            InputStreamReader isr = new InputStreamReader(new FileInputStream(filePath), "UTF-8");
            BufferedReader br = new BufferedReader(isr, 5 * 1024 * 1024); // read the text file with a 5 MB buffer
            List<String> list = new ArrayList<>(1024);
            int count = 0;
            String line = null;
            // skip the header line
            String title = br.readLine();
            // read the body
            while ((line = br.readLine()) != null) {
                // one row of data
                String[] arr = line.split(",");
                // build the entity data
                list.add(arr[0]);
                if (list.size() >= size) {
                    count++;
                    // call the batch insert
                    //service.batchInsert(list);
                    log.info("list size: " + list.size());
                    log.info("persisted " + count + " time(s)");
                    list.clear(); // clear the list to release the references and avoid running out of memory
                }
            }
            if (list.size() > 0) {
                count++;
                // call the batch insert
                //service.batchInsert(list);
                log.info("list size: " + list.size());
                log.info("last persist, total: " + count);
                //list.clear();
            }
            br.close(); // close the reader and the underlying stream
        }
    

    File splitting

    Note: the parts are not exactly equal in size; adjust to your needs.

    Approach

    Approach: given the desired number of parts, compute the average number of bytes per part, then loop over the part count and split each one. When splitting the first part, seek past the average byte count and read roughly a line's worth of bytes, then scan those bytes for \r or \n; a \r or \n marks the end of a line, so record that byte position as the part's end. Knowing the start and end byte positions, the data between them can be written out as a sub-file. Then continue with the next part, computing its end position from the previous part's recorded end, until the desired number of parts is reached or the whole file has been read.
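As a cross-check of the idea, here is a minimal Python sketch of the same algorithm (the function name, the `probe_size` default, and the output naming scheme are illustrative, not taken from the Java code below):

```python
import os

def split_file(path, file_count, probe_size=200):
    """Split a text file into file_count parts, cutting only at line ends."""
    file_size = os.path.getsize(path)
    average = file_size // file_count  # average bytes per part
    out_paths = []
    with open(path, "rb") as f:
        start = 0
        for i in range(file_count):
            if i + 1 == file_count:
                end = file_size  # the last part runs to end of file
            else:
                end = start + average
                f.seek(end)
                # scan forward from the average offset until a line break
                while True:
                    chunk = f.read(probe_size)
                    if not chunk:          # hit EOF before a newline
                        end = file_size
                        break
                    nl = chunk.find(b"\n")
                    if nl != -1:
                        end += nl + 1      # cut just after the newline
                        break
                    end += len(chunk)
            f.seek(start)
            out_path = path + str(i + 1)
            with open(out_path, "wb") as out:
                out.write(f.read(end - start))
            out_paths.append(out_path)
            start = end
    return out_paths
```

Unlike the Java version, this cuts immediately after a `\n`, so the parts concatenate back to the original file exactly.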

       /**
         * Split a large file
         */
        public static void splitFile(String filePath, int fileCount) throws IOException {
            FileInputStream fis = new FileInputStream(filePath);
            FileChannel inputChannel = fis.getChannel();
            final long fileSize = inputChannel.size();
            long average = fileSize / fileCount; // average bytes per part
            long bufferSize = 200; // probe block size, adjust as needed
            ByteBuffer byteBuffer = ByteBuffer.allocate((int) bufferSize); // allocate a probe buffer
            long startPosition = 0; // start position of the current part
            long endPosition = average < bufferSize ? 0 : average - bufferSize; // end position of the current part
            for (int i = 0; i < fileCount; i++) {
                if (i + 1 != fileCount) {
                    byteBuffer.clear(); // reset the buffer before probing this part
                    int read = inputChannel.read(byteBuffer, endPosition); // read a probe block
                    readW:
                    while (read != -1) {
                        byteBuffer.flip(); // switch to read mode
                        byte[] array = byteBuffer.array();
                        for (int j = 0; j < byteBuffer.limit(); j++) { // only scan the bytes actually read
                            byte b = array[j];
                            if (b == 10 || b == 13) { // \n or \r marks the end of a line
                                endPosition += j;
                                break readW;
                            }
                        }
                        endPosition += bufferSize;
                        byteBuffer.clear(); // reset the buffer pointers
                        read = inputChannel.read(byteBuffer, endPosition);
                    }
                } else {
                    endPosition = fileSize; // the last part simply runs to the end of the file
                }
    
                FileOutputStream fos = new FileOutputStream(filePath + (i + 1));
                FileChannel outputChannel = fos.getChannel();
                inputChannel.transferTo(startPosition, endPosition - startPosition, outputChannel); // channel-to-channel transfer
                outputChannel.close();
                fos.close();
                startPosition = endPosition + 1;
                endPosition += average;
            }
            inputChannel.close();
            fis.close();
        }
    

    Test method

    //    public static void main(String[] args) throws Exception {
    //        Scanner scanner = new Scanner(System.in);
    //        scanner.nextLine();
    //        long startTime = System.currentTimeMillis();
    //        splitFile("F:\\data\\big_file",5);
    //        long endTime = System.currentTimeMillis();
    //        System.out.println("elapsed: " + (endTime - startTime) + " ms");
    //        scanner.nextLine();
    //    }
    
        public static void main(String[] args) throws IOException {
            createBigFile();
        }
    

    Reference: https://blog.csdn.net/u013632755/article/details/80467324 (thanks to the original author)


    neo4j import

    The import command (see the official documentation)

    neo4j-admin import  --database=neo4j --nodes=CMDB_APPNODE_INFO="D:\neo4j\neo4j-community-4.1.0-windows\neo4j-community-4.1.0\import\CMDB_APPNODE_INFO.csv" --nodes=CMDB_IPCONF_ATTR="D:\neo4j\neo4j-community-4.1.0-windows\neo4j-community-4.1.0\import\CMDB_IPCONF_ATTR.csv" --nodes=CMDB_SERVERDEV_INFO="D:\neo4j\neo4j-community-4.1.0-windows\neo4j-community-4.1.0\import\CMDB_SERVERDEV_INFO.csv" --relationships=HAS_APPNODE="D:\neo4j\neo4j-community-4.1.0-windows\neo4j-community-4.1.0\import\CMDB_APPNODE_IP_REL.csv" --relationships=HAS_SERVERDEV="D:\neo4j\neo4j-community-4.1.0-windows\neo4j-community-4.1.0\import\CMDB_SERVER_IP_REL.csv" --multiline-fields=true
    

    Install py2neo

    pip install py2neo
    

    Create the database driver object

    graph = Graph('http://localhost:7474', username='your-username', password='your-password')
    

    Run a Cypher statement

    graph.run(cypher_statement)  # any Cypher string
    

    Example

    Notes:

    • Neo4j requires the CSV files being loaded to be placed in the import folder under the Neo4j installation directory.
    • file:///path (whether the path uses \ or / depends on the OS, but file must be followed by ///).
    • Save the CSV as UTF-8 (without BOM) to avoid garbled Chinese characters in the content.
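For the UTF-8-without-BOM requirement, a quick Python check-and-fix sketch (the helper names are my own; a leading BOM would otherwise leak into the first column name when Neo4j reads the CSV):

```python
BOM = b"\xef\xbb\xbf"  # the UTF-8 byte order mark

def has_utf8_bom(path):
    """True if the file starts with a UTF-8 BOM."""
    with open(path, "rb") as f:
        return f.read(3) == BOM

def strip_utf8_bom(path):
    """Rewrite the file without the leading BOM, if present."""
    with open(path, "rb") as f:
        data = f.read()
    if data.startswith(BOM):
        with open(path, "wb") as f:
            f.write(data[len(BOM):])
```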

    Add nodes

    # -*- coding: utf-8 -*-
    """
    Created on Mon Jul 13 21:38:43 2020
    
    @author: Administrator
    """
    from py2neo import Graph,Node,Relationship
    from neo4j import GraphDatabase
    import csv
    import re
    
    def GetHeader(file):
        #file = r'C:\Users\Administrator\Desktop\neo4j数据库表\CMDB_APPNODE_INFO.csv'
        csv_reader = csv.reader(open(file))
        header = next(csv_reader)  # the first row is the header
        
        # map every column name to the LOAD CSV accessor 'line.<column>'
        Dict = {}
        for k in header:
            Dict[k] = 'line.' + k
            
        return Dict
    
    def addEdges(file, edge, edge_descript, graph):
        csv_reader = csv.reader(open(file))
        count = 0
        
        for row in csv_reader:
            if count == 0:# read the header row and build the dict header_d
                header = row
                header_d = {}
                for k in header:
                    header_d[k] = ''
                    
                count = 1
                continue
            
            
            for i in range(len(header)):# copy the row into header_d, then convert it to str
                header_d[re.sub("\'",'', header[i])] = row[i]
            
            header = str(header_d)
            
            cypher_add_edge = r"""
                        MATCH (a:{0}),(b:{2})
                        WHERE a.ROW_ID={1} AND b.ROW_ID={3}
                        WITH a,b SKIP 0 LIMIT 50
                        MERGE (a)-[:{4} {5}]->(b)
                      """.format(edge[0], row[0], edge[1], row[1], edge_descript, header)
            print(cypher_add_edge)
              
            break
    
    # file paths
    #CMDB_APPNODE_INFO, CMDB_SERVERDEV_INFO, CMDB_IPCONF_ATTR
    #CMDB_APPNODE_IP_REL
    filename = r'CMDB_APPNODE_INFO'+'.csv'
    
    file = r'D:\neo4j\neo4j-community-4.1.0-windows\neo4j-community-4.1.0\import\{}'.format(filename)
    
    #connect to the Neo4j database: address, username, password
    graph = Graph('http://localhost:7474',username='neo4j',password='neo4jhz')
    
    #create nodes
    # "example cypher statement from http://console.neo4j.org/"
    
    header_dict = GetHeader(file)
    header = str(header_dict)
    header = re.sub("\'",'',header)
    
     
    
    cypher_add_nodes = r"""
                        USING PERIODIC COMMIT 1000
                        LOAD CSV WITH HEADERS FROM "file:///{0}" AS line
                        CREATE ({1}:{1} {2})
                      """.format(filename, filename, header)
    
    
    create_index = r"""
                        CREATE INDEX ON :{} (ROW_ID)
                    """.format(filename)
    
    
    query = """
            MATCH(n) RETURN(n) LIMIT 10
        """
    
    print(cypher_add_nodes)
    #graph.run(cypher_add_nodes)
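Isolating just the header-to-property-map step used above: GetHeader maps each column name to `line.<column>`, and the `re.sub` strips the quotes so the result is valid Cypher map syntax rather than a Python dict literal. A self-contained sketch:

```python
import re

def header_to_cypher_map(header):
    """Build the Cypher property map for LOAD CSV from a list of column names."""
    d = {k: 'line.' + k for k in header}
    # str(d) gives "{'ROW_ID': 'line.ROW_ID', ...}"; dropping the quotes
    # turns it into the Cypher map syntax {ROW_ID: line.ROW_ID, ...}
    return re.sub("'", "", str(d))
```

`header_to_cypher_map(['ROW_ID', 'NAME'])` yields `{ROW_ID: line.ROW_ID, NAME: line.NAME}`, which slots directly into the `CREATE` clause of the `cypher_add_nodes` statement.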
    

    Add edges

    # -*- coding: utf-8 -*-
    """
    Created on Wed Jul 15 11:31:06 2020
    
    @author: Administrator
    """
    
    
    from py2neo import Graph,Node,Relationship
    from neo4j import GraphDatabase
    import csv
    import re
    
    def createAttr(Dict):
        s = '{'
        for key, value in Dict.items():
            s = s + key + ':' + '\''+ value + '\''  + ','
        
        s = s[:-1] + '}'
        return s
            
    
    
    def addEdges(file, edge, edge_descript, graph):
        csv_reader = csv.reader(open(file))
        count = 0
        n = 0
        #header_d = {}
        
        L = 0
        for row in csv_reader:
            L+=1
        
        print('running: {} rows'.format(L))
        
        #a csv_reader can only be iterated once; re-create it to read the file again
        csv_reader = csv.reader(open(file))
        
        for row in csv_reader:
            if count == 0:# read the header row
                header_row = row
                n = len(header_row)
                
                count = 1
                continue
            
            header_d = {}
            for i in range(n):# copy the row into header_d, then build the property string
                header_d[header_row[i]] = row[i]
            
            header = createAttr(header_d)
    
            #print(header)
            
            cypher_add_edge1 = r"""
                        MATCH (a:{0}),(b:{2})
                        WHERE a.ROW_ID='{1}' AND b.ROW_ID='{3}'
                        WITH a,b SKIP 0 LIMIT 50
                        MERGE (a)-[:{4} {5}]->(b)
                      """.format(edge[0], row[0], edge[1], row[1], edge_descript, header)
                      
            cypher_add_edge2 = r"""
                        MATCH (a:{0}),(b:{2})
                        WHERE a.ROW_ID='{1}' AND b.ROW_ID='{3}'
                        MERGE (a)-[:{4} {5}]->(b)
                      """.format(edge[0], row[0], edge[1], row[1], edge_descript, header)
                      
            #print(cypher_add_edge1)
            #print('running')
            #break
            graph.run(cypher_add_edge1)
            
            count += 1
            if L >= 10 and count % (L // 10) == 0: # crude progress bar (guard against L < 10)
                print('#', end='')
            #break
    
    
    # file paths
    #CMDB_APPNODE_INFO, CMDB_SERVERDEV_INFO, CMDB_IPCONF_ATTR
    #CMDB_APPNODE_IP_REL, CMDB_SERVER_IP_REL
    filename = r'CMDB_SERVER_IP_REL'+'.csv'
    edge = ['CMDB_IPCONF_ATTR', 'CMDB_SERVERDEV_INFO']
    edge_descript = 'HAS_SERVERDEV' # or HAS_APPNODE
    #file = r'D:\\neo4j\\neo4j-community-4.1.0-windows\\neo4j-community-4.1.0\\import\\{}'.format(filename)
    
    #connect to the Neo4j database: address, username, password
    graph = Graph('http://localhost:7474',username='neo4j',password='neo4jhz')
    
    f = "D:\\neo4j\\neo4j-community-4.1.0-windows\\neo4j-community-4.1.0\\import\\" + filename
    
    addEdges(f, edge, edge_descript, graph)
    

    Extract nodes and edges from an .xls file

    # -*- coding: utf-8 -*-
    """
    Created on Sun Jul 19 15:16:59 2020
    
    @author: Administrator
    """
    import xlrd
    from py2neo import Graph
    
    def CreateServerAndDevice(path):
        #connect to the Neo4j database: address, username, password
        graph = Graph('http://localhost:7474',username='neo4j',password='neo4jhz')
        
        workbook = xlrd.open_workbook(path)
        sheet = workbook.sheet_by_name('Sheet1')
        
        count = 0
        
        nrows = sheet.nrows
        
        for i in range(1, nrows):# skip the header row
            count += 1
            if nrows >= 10 and count % (nrows // 10) == 0: # crude progress bar
                print('#', end='')
                
            serverip = sheet.cell_value(i,0)
            servermac = sheet.cell_value(i,1)
            
            devip = sheet.cell_value(i,2)
            devname = sheet.cell_value(i,3)
            port = sheet.cell_value(i,4)
            
            appname = sheet.cell_value(i,5)
            enappname = sheet.cell_value(i,6)
            
            create_server_dev_edges = """
                        CREATE (:DEVICE {{DEVIP: '{0}', DEVNAME: '{1}', port: '{2}'}})-[:LEARN_MAC]->(:SERVER {{SERVERIP: '{3}', SERVERMAC: '{4}'}})
                        """.format(devip, devname, port, serverip, servermac)
            
            
                        
            #print(create_server)
            #print(create_edge_server_exist_in)
            #print(create_edge_device_exist_in)
            #break
            graph.run(create_server_dev_edges)
            
    
    def AddEdges():
        #connect to the Neo4j database: address, username, password
        graph = Graph('http://localhost:7474',username='neo4j',password='neo4jhz')
        
        create_edge_server_exist_in = """
                        MATCH (a:SERVER), (b:CMDB_IPCONF_ATTR)
                        WHERE a.SERVERIP = b.IP_ADDR
                        WITH a, b SKIP 0 LIMIT 50
                        MERGE (a)-[:SERVER_EXIST_IN]->(b)
                        """
        
        create_edge_device_exist_in = """
                        MATCH (a:DEVICE), (b:CMDB_IPCONF_ATTR)
                        WHERE a.DEVIP = b.IP_ADDR
                        WITH a, b SKIP 0 LIMIT 50
                        MERGE (a)-[:DEVICE_EXIST_IN]->(b)
                        """
        print(create_edge_server_exist_in)
        graph.run(create_edge_server_exist_in)
        print(create_edge_device_exist_in)
        graph.run(create_edge_device_exist_in)
    
    #path = r'C:\Users\Administrator\Desktop\工行实习\基于neo4j的网管监控告警影响及关键分析课题\neo4jdbTables\局域网连接数据.xls'
    #CreateServerAndDevice(path)
    AddEdges()
    

    Import alarm information

    # -*- coding: utf-8 -*-
    """
    Created on Wed Jul 22 16:04:34 2020
    
    @author: Administrator
    """
    import xlrd
    from py2neo import Graph
    
    
    def ReadEvents(path):
        #connect to the Neo4j database: address, username, password
        graph = Graph('http://localhost:7474',username='neo4j',password='neo4jhz')
        
        workbook = xlrd.open_workbook(path)
        sheet = workbook.sheet_by_name('Sheet1')
        
        count = 0
        
        nrows = sheet.nrows
        
        for i in range(1, nrows):# skip the header row
            count += 1
            if nrows >= 10 and count % (nrows // 10) == 0: # crude progress bar
                print('#', end='')
                
            device_name = sheet.cell_value(i,0)
            port = sheet.cell_value(i,1)
            
            event_time = sheet.cell_value(i,2)
            event = sheet.cell_value(i,3)
            
            cypher = """
                        MATCH (n {{DEVNAME:'{}', port:'{}'}})
                        SET n.event = '{}', n.event_time = '{}'
                        """.format(device_name, port, event, event_time)
            
            graph.run(cypher)
            #print(cypher)
            #break
    
    path = r'C:\Users\Administrator\Desktop\工行实习\cmdb最新表\events.xlsx'
    ReadEvents(path) 
    

    Compilation issue

    Restarting the IDE fixes it.

    Data issue

    In the original CSV, change the cell format of numbers displayed in scientific notation to numeric. Otherwise, large numbers of distinct values will be recorded as if they were identical.
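If fixing the cells by hand in Excel is impractical, the same repair can be scripted. A sketch, with two stated assumptions: any cell that parses as a number in scientific notation should be expanded, and digits Excel has already truncated cannot be recovered (only the notation is changed):

```python
import csv
from decimal import Decimal, InvalidOperation

def expand_scientific_notation(in_path, out_path):
    """Rewrite a CSV so values like '1.23457E+15' become plain digit strings."""
    with open(in_path, newline="", encoding="utf-8") as src, \
         open(out_path, "w", newline="", encoding="utf-8") as dst:
        writer = csv.writer(dst)
        for row in csv.reader(src):
            fixed = []
            for cell in row:
                if "E" in cell.upper():       # candidate scientific notation
                    try:
                        cell = format(Decimal(cell), "f")
                    except InvalidOperation:
                        pass                  # not a number, keep as-is
                fixed.append(cell)
            writer.writerow(fixed)
```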

  • Batch file import with easyExcel

    2021-04-25 21:46:43

    Some notes on importing files with easyExcel

    Requirement: import large files whose data runs to tens or hundreds of thousands of rows; mark error cells red in an error file, which can be exported, corrected, and imported again.

    Because the data volume is large, reading it all into memory at once may cause an out-of-memory error.

    So easyExcel (POI) listeners are used for the processing.

     

    Three steps:

    1. Parse the Excel file into an InputStream, read the stream, and parse the workbook.

    2. Validate the format of each row, recording correct and incorrect rows separately.

    3. Through the listener, persist to the database after every 150 parsed rows; keep the error rows in memory (assuming there are not many of them).

    // ossfs is used here; essentially the Excel file is read as an input stream.
    // Since the stream has to travel between two systems, the file is uploaded straight to OSS.
    try {
        in = new FileInputStream(localFileName);
    } catch (FileNotFoundException e) {
        in = HttpUtil.io(HttpUtil.Atom.builder().url(diseaseDto.getFileUrl()).build());
    }
    // parse the Excel file here; OltHosIcdDiseaseListener is a custom listener
    try {
        LoggerUtil.info(LOGGER, "start parsing IcdDisease");
        OltHosIcdDiseaseListener oltHosIcdDiseaseListener = new OltHosIcdDiseaseListener(isCfgPrd, icdCodeList, delIcdCodeList, diseaseDto, oltConfigService, exportTaskHandler);
        excelReader = EasyExcel.read(in, oltHosIcdDiseaseListener).build();
        ReadSheet readSheet = EasyExcel.readSheet(0).build();
        excelReader.read(readSheet);
    } finally {
        try {
            if (in != null) {
                in.close();
            }
            if (excelReader != null) {
                // do not forget to close this: reading creates temporary files that would otherwise fill the disk
                excelReader.finish();
            }
        } catch (Exception e) {
            LoggerUtil.error(LOGGER, "{0},{1}", e.getMessage(), e);
        }
    }
    // some lists and objects are initialized through the constructor
    // this listener is the core of the import: all parsing, validation, and persistence logic lives here
    // a no-model (Map-based) read is used
    @Slf4j
    public class OltHosIcdDiseaseListener extends AnalysisEventListener<Map<Integer, String>> {
        private OltConfigService oltConfigService;
        private ExportTaskHandler exportTaskHandler;
        private static final int batchCount = 150;
        private int countNum = 0;
        private boolean isCfgPrd;
        private int successCount = 0;
        private int errorCount = 0;
    
        private List<String> checkRepeatCode = new ArrayList<>();
        private List<String> icdCodeList;
        private List<String> delIcdCodeList;
        private OltHosIcdDiseaseDto diseaseDto;
        private List<OltHosIcdDiseaseDto> successList = new ArrayList<>();
        private List<OltHosIcdDiseaseDto> errorList = new ArrayList<>();
        private List<OltHosIcdDiseaseDto> tempErrorList = new ArrayList<>();
    
    
        public OltHosIcdDiseaseListener(boolean isCfgPrd, List<String> icdCodeList, List<String> delIcdCodeList, OltHosIcdDiseaseDto diseaseDto,
                                        OltConfigService oltConfigService, ExportTaskHandler exportTaskHandler) {
            this.isCfgPrd = isCfgPrd;
            this.icdCodeList = icdCodeList;
            this.delIcdCodeList = delIcdCodeList;
            this.diseaseDto = diseaseDto;
            this.oltConfigService = oltConfigService;
            this.exportTaskHandler = exportTaskHandler;
        }
    
    /**
      * Called once for every parsed row
      * data --> the row data
      * analysisContext --> the Excel context info
     */
    @Override
    public void invoke(Map<Integer, String> data, AnalysisContext context) {
        int rowNumber = context.readRowHolder().getRowIndex() + 1;
    
        // the header is on the second row of this template
        if (rowNumber == 2) {
            // validate the header
            checkExcelHead(data);
        } else if (rowNumber > 2) {
            // validate the data row
            checkReadData(data);
        }
    
        // persist once 150 rows have accumulated
        if (countNum >= batchCount) {
            // process the valid rows read so far
            batchOperateData();
        }
        countNum++;
    }
    
    /**
     * @author songhc
     * @create
     * @desc called after parsing completes, to make sure all data has been processed
     **/
    @Override
    public void doAfterAllAnalysed(AnalysisContext analysisContext) {
        // save here too, so the rows left over from the last batch are also persisted
        Map<String, Object> objMap = new HashMap<>();
        // process the valid rows
        batchOperateData();
    
        // fill the error rows into the OSS spreadsheet template
        Object object = uploadErrorData(errorList, diseaseDto);
    
        objMap.put("errorInfo", object);
        objMap.put("successCount", successCount);
        objMap.put("errorCount", errorCount);
    
        // record the error data in redis for the next run
        RedisStringHandler.set(String.format(RedisKeyConstants.EXPORT_ERROR_RESULT, "disease" + diseaseDto.getUserId() + "_" + diseaseDto.getRgId() + "_" + diseaseDto.getHosId()), JSONObject.toJSONString(objMap));
    }
    
    // assemble all the error rows,
    // including the cells to highlight
    private Object uploadErrorData (List<OltHosIcdDiseaseDto> errorList, OltHosIcdDiseaseDto dto) {
        Map<Integer, List<Integer>> map = new HashMap<>();
        LinkedList<OltHosIcdDiseaseDto> newErrorList = new LinkedList<>();
        if (CollectionUtils.isNotEmpty(errorList)) {
            for (int i = 0; i < errorList.size(); i++) {
                OltHosIcdDiseaseDto e = errorList.get(i);
                List<Integer> integerList = new ArrayList<>();
                if (e.getErrorReasonMap() != null && !e.getErrorReasonMap().isEmpty()) {
                    List<String> reasonList = new ArrayList<>();
                    for (Integer key: e.getErrorReasonMap().keySet()) {
                        // mark this cell red
                        integerList.add(key);
                        reasonList.add(e.getErrorReasonMap().get(key));
                    }
                    map.put(i + 2, integerList);
                    e.setErrorReason(String.join("、", reasonList));
                }
                newErrorList.add(e);
            }
        }
    
    
        // build the input parameters for the export service
        String uuid = UUIDUtil.create();
        String errorFileName = dto.getHosName() + "(diagnoses to fix)" + dto.getStatDataStr() + ".xlsx";
        SysExportRecordDto sysExportRecordDto = SysExportRecordDto.builder().batchId(uuid).userId(dto.getUserId()).pfCode(dto.getPfCode())
                .source(dto.getSource()).fileName(errorFileName).creator(dto.getCreator()).operator(dto.getCreator()).build();
        // create the export record
        QueueHandler.createTaskRecord(sysExportRecordDto);
        // get the url
        // pseudo-code
        String fileName = "aaa.xlsx";
        String BUCKET_NAME = "bbb";
        String fileUrl = String.format(OssClientConfig.OSS_REAL_PATH, BUCKET_NAME,
                UploadFileType.getFolderByType(UploadFileType.REPORT)).concat(fileName);
        // submit the asynchronous export task
        this.exportTaskHandler.exportIcdErrorDiseaseData(OltErrorResult.builder().map(map).errorList(newErrorList)
                        .fileName(errorFileName).source(dto.getSource()).build(),
                uuid, errorFileName, fileUrl);
        // build and return the queue info
        return QueueHandler.buildQueueInfo(sysExportRecordDto);
    }
    
    private void batchOperateData() {
        checkErrorExcelList(tempErrorList, icdCodeList);
        checkSuccessExcelList(successList, tempErrorList, icdCodeList);
    
        // append the temporary error rows to the full error list
        this.errorList.addAll(tempErrorList);
        // clear the lists
        this.successList.clear();
        this.tempErrorList.clear();
        this.countNum = 0;
    }
    
    private void checkExcelHead(Map<Integer, String> data) {
        boolean templateFlag = true;
        // second row: validate the Excel header
        try {
            String diseaseCategoryStr = data.get(0);
            if (StringUtils.isBlank(diseaseCategoryStr) || !"诊eee(必填)".equals(diseaseCategoryStr)) {
                templateFlag = false;
            }
        } catch (Exception e) {
            templateFlag = false;
        }
        try {
            String icdNameStr = data.get(1);
            if (StringUtils.isBlank(icdNameStr) || !"医vv称(必填)".equals(icdNameStr)) {
                templateFlag = false;
            }
        } catch (Exception e) {
            templateFlag = false;
        }
        try {
            String icdCodeStr = data.get(2);
            if (StringUtils.isBlank(icdCodeStr) || !"医aa(必填)".equals(icdCodeStr)) {
                templateFlag = false;
            }
        } catch (Exception e) {
            templateFlag = false;
        }
        if (!templateFlag) {
            throw new PlatException("file template mismatch");
        }
    }
    
    private void checkReadData(Map<Integer, String> data) {
        // iterate over the cells
        OltHosIcdDiseaseDto temDisDto = OltHosIcdDiseaseDto.buildDefault();
        temDisDto.setHosId(diseaseDto.getHosId());
        // key: column index, value: error reason
        Map<Integer, String> map = new HashMap<>();
        boolean flag = true;
        try {
            // parse the first column (index 0)
            String diseaseCategory = data.get(0);
            if (StringUtils.isBlank(diseaseCategory)) {
                temDisDto.setDiseaseCategoryStr(StringUtils.EMPTY);
                map.put(0, "aaa is empty");
                flag = false;
            } else {
                temDisDto.setDiseaseCategoryStr(diseaseCategory);
            }
        } catch (Exception e) {
            temDisDto.setDiseaseCategoryStr(StringUtils.EMPTY);
            map.put(0, "bbb is empty");
            flag = false;
        }
    
        try {
            String icdName = data.get(1);
            if (StringUtils.isBlank(icdName)) {
                temDisDto.setIcdName(StringUtils.EMPTY);
                map.put(1, "is empty");
                flag = false;
            } else {
                temDisDto.setIcdName(icdName);
            }
        } catch (Exception e) {
            temDisDto.setIcdName(StringUtils.EMPTY);
            map.put(1, "ccc is empty");
            flag = false;
        }
    
        try {
            String icdCode = data.get(2);
            if (StringUtils.isBlank(icdCode)) {
                temDisDto.setIcdCode(StringUtils.EMPTY);
                map.put(2, "ddd is empty");
                flag = false;
            } else {
                temDisDto.setIcdCode(icdCode);
            }
        } catch (Exception e) {
            temDisDto.setIcdCode(StringUtils.EMPTY);
            map.put(2, "ddd is empty");
            flag = false;
        }
    
    
        try {
            if (!DiseaseCategory.TCM_SYNDROME.getDesc().equals(temDisDto.getDiseaseCategoryStr())) {
                String standardIcdName = data.get(3);
                if (isCfgPrd && StringUtils.isBlank(standardIcdName)) {
                    temDisDto.setStandardIcdName(StringUtils.EMPTY);
                    map.put(3, "vvv is empty");
                    flag = false;
                } else {
                    temDisDto.setStandardIcdName(standardIcdName);
                }
            }
        } catch (Exception e) {
            temDisDto.setStandardIcdName(StringUtils.EMPTY);
            map.put(3, "vvv is empty");
            flag = false;
        }
    
        try {
            if (!DiseaseCategory.TCM_SYNDROME.getDesc().equals(temDisDto.getDiseaseCategoryStr())) {
                String standardIcdCode = data.get(4);
                if (isCfgPrd && StringUtils.isBlank(standardIcdCode)) {
                    temDisDto.setStandardIcdCode(StringUtils.EMPTY);
                    map.put(4, "eee is empty");
                    flag = false;
                } else {
                    temDisDto.setStandardIcdCode(standardIcdCode);
                }
            }
        } catch (Exception e) {
            temDisDto.setStandardIcdCode(StringUtils.EMPTY);
            map.put(4, "eee is empty");
            flag = false;
        }
        temDisDto.setErrorReasonMap(map);
    
    
        // flag == false means the row has problems
        if (!flag) {
            tempErrorList.add(temDisDto);
        } else {
            successList.add(temDisDto);
        }
    }
    
    private void checkErrorExcelList(List<OltHosIcdDiseaseDto> errorList, List<String> icdCodeList) {
        if (CollectionUtils.isNotEmpty(errorList)) {
            // accumulate errors here; valid rows are rebuilt into a new list
            errorList.forEach(e -> {
                Map<Integer, String> map = new HashMap<>();
                if (!DiseaseCategory.belongTo(e.getDiseaseCategoryStr())) {
                    map.put(0, "aaa is invalid");
                } else {
                    e.setDiseaseCategory(DiseaseCategory.getCodeByDesc(e.getDiseaseCategoryStr()));
                }
                
                // does the Excel file itself contain duplicates?
                if (checkRepeatCode.contains(e.getIcdCode())) {
                    map.put(2, "bbb duplicated");
                }
    
                if (CollectionUtils.isNotEmpty(icdCodeList) && icdCodeList.contains(e.getIcdCode())) {
                    map.put(2, "ttt duplicated");
                }
                if (e.getErrorReasonMap() != null && !e.getErrorReasonMap().isEmpty()) {
                    Map<Integer, String> errorReasonMap = e.getErrorReasonMap();
                    errorReasonMap.putAll(map);
                    e.setErrorReasonMap(errorReasonMap);
                }
                errorCount++;
            });
        }
    }
    
    /**
     * Populates errorList in place (intrusively)
     * @param list
     * @param errorList
     * @param icdCodeList
     */
    private void checkSuccessExcelList(List<OltHosIcdDiseaseDto> list, List<OltHosIcdDiseaseDto> errorList,
                                       List<String> icdCodeList) {
        List<OltHosIcdDiseaseDto> newList = new ArrayList<>();
    
        if (CollectionUtils.isNotEmpty(list)) {
            // accumulate errors; valid rows are rebuilt into a new list
            list.forEach(e -> {
                Map<Integer, String> map = new HashMap<>();
                boolean flag = false;
                // validate the category
                if (!DiseaseCategory.belongTo(e.getDiseaseCategoryStr())) {
                    map.put(0, "invalid");
                    flag = true;
                } else {
                    e.setDiseaseCategory(DiseaseCategory.getCodeByDesc(e.getDiseaseCategoryStr()));
                }
    
                // does the Excel file itself contain duplicates?
                if (checkRepeatCode.contains(e.getIcdCode())) {
                    map.put(2, "duplicated");
                    flag = true;
                } else {
                    // check against the existing diagnosis codes
                    if (CollectionUtils.isNotEmpty(icdCodeList) && icdCodeList.contains(e.getIcdCode())) {
                        map.put(2, "duplicated");
                        flag = true;
                    }
                }
                e.setErrorReasonMap(map);
                if (flag) {
                    errorCount++;
                    errorList.add(e);
                } else {
                    e.setIcdPinyin(HzUtils.getPinyinCap(e.getIcdName(), HzUtils.CaseType.UPPERCASE));
                    e.setIcdWb(HzUtils.getWbCap(e.getIcdName(), HzUtils.CaseType.UPPERCASE));
                    newList.add(e);
                    checkRepeatCode.add(e.getIcdCode());
                    successCount++;
                }
            });
        }
    
        // persist the valid rows
        if (CollectionUtils.isNotEmpty(newList)) {
            oltConfigService.batchAddHosIcdDisease(delIcdCodeList, newList);
        }
    }

    The error rows are exported with easyExcel's template-fill mechanism; the template is stored on OSS.

     
    
    /**
     * @author songhc
     * @create
     * @desc export the error rows
     **/
    @Async
    public void exportIcdErrorDiseaseData(OltErrorResult dto, String fileBatch, String fileName, String fileUrl) {
        Map<Integer, List> map = new HashMap<>();
        map.put(0, dto.getErrorList());
        Map<Integer, Map<Integer, List<Integer>>> styleMap = new HashMap<>();
        styleMap.put(0, dto.getMap());
        ExportExistHandler.exportExistTemplateData(map, styleMap, fileBatch, fileName, fileUrl);
    }

    Next comes the implementation that fills the error template.

    /**
     * @param errorMap key: sheetNo, value: the data to fill
     * @param styleMap key: sheetNo, value: coordinates of the error cells
     * @param fileBatch batch number
     * @param fileName file name
     * @param fileUrl file url
     * @description export helper (no paging query; the data is passed in dynamically)
     * @className exportNoModelData
     */
    public static void exportExistTemplateData(Map<Integer, List> errorMap, Map<Integer, Map<Integer, List<Integer>>> styleMap, String fileBatch, String fileName, String fileUrl) {
        String ossFileName = fileName.substring(0, fileName.lastIndexOf('.'))
                .concat("-").concat(LocalDateTime.now()
                        .format(DateTimeFormatter.ofPattern("yyyyMMddHHmmss"))).concat(fileName.substring(fileName.lastIndexOf('.')));
        InputStream inputStream = HttpUtil.io(HttpUtil.Atom.builder().url(fileUrl).build());
        if (null == inputStream) {
            return;
        }
        String localFileName = String.format(TaskNoteHandler.staticExportConfig.getExportPath(), ossFileName);
        ExcelWriter excelWriter = null;
        int resultCount = 0;
        try {
            if (errorMap != null && !errorMap.isEmpty()) {
                excelWriter = EasyExcel.write(localFileName)
                            .withTemplate(inputStream)
                            .build();
                // for循环是一个excel可能有多个sheet的兼容写法
                for (Integer i: errorMap.keySet()) {
                    // 这里使用easyExcel的 registerWriteHandler 方法, 自定义CellColorSheetWriteHandler实现, 给每一个单元格填充颜色
                    WriteSheet writeSheet = EasyExcel.writerSheet(i).registerWriteHandler(new CellColorSheetWriteHandler(styleMap.get(i),
                            IndexedColors.RED1.getIndex())).build();
                    excelWriter.fill(errorMap.get(i), writeSheet);
                }
            }
        } catch (Exception e){
            LoggerUtil.error(LOGGER, "文件写入异常,error{0}", e);
            // 文件导出失败
            TaskNoteHandler.doUploadFailed(fileBatch, resultCount);
            return;
        } finally {
            // 关闭流
            if (excelWriter != null) {
                excelWriter.finish();
            }
        }
        // 1、上传文件(多种方案);2、更新记录
        TaskNoteHandler.doUploadAndNote(fileBatch, ossFileName, localFileName, resultCount);
    }
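    The first lines of the method build the OSS file name by inserting a timestamp before the extension. A small sketch of that logic, with a hypothetical `ExportNames` helper; unlike the inline version it also tolerates names without a dot, which would otherwise throw `StringIndexOutOfBoundsException`:

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

// Sketch of the timestamped-name construction used for the OSS file name.
// The class and method names are illustrative.
public class ExportNames {

    public static String timestamped(String fileName, LocalDateTime now) {
        String stamp = now.format(DateTimeFormatter.ofPattern("yyyyMMddHHmmss"));
        int dot = fileName.lastIndexOf('.');
        if (dot < 0) {                      // no extension: just append the stamp
            return fileName + "-" + stamp;
        }
        // insert "-yyyyMMddHHmmss" between the base name and the extension
        return fileName.substring(0, dot) + "-" + stamp + fileName.substring(dot);
    }

    public static void main(String[] args) {
        LocalDateTime t = LocalDateTime.of(2022, 4, 27, 14, 50, 0);
        System.out.println(timestamped("errors.xlsx", t)); // errors-20220427145000.xlsx
    }
}
```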
    /**
     * @description Custom cell style interceptor
     * @className CellColorSheetWriteHandler
     * @Author songhc
     */
    public class CellColorSheetWriteHandler implements CellWriteHandler {

        /**
         * key: row index
         * value: column indexes of the error cells in that row
         */
        private Map<Integer, List<Integer>> map;

        /**
         * fill color index
         */
        private Short colorIndex;

        public CellColorSheetWriteHandler(Map<Integer, List<Integer>> map, Short colorIndex) {
            this.map = map;
            this.colorIndex = colorIndex;
        }

        public CellColorSheetWriteHandler() {
        }

        @Override
        public void beforeCellCreate(WriteSheetHolder writeSheetHolder, WriteTableHolder writeTableHolder, Row row, Head head, Integer integer, Integer integer1, Boolean aBoolean) {
        }

        /**
         * Called after the cell is created
         */
        @Override
        public void afterCellCreate(WriteSheetHolder writeSheetHolder, WriteTableHolder writeTableHolder, Cell cell, Head head, Integer integer, Boolean aBoolean) {
        }

        @Override
        public void afterCellDataConverted(WriteSheetHolder writeSheetHolder, WriteTableHolder writeTableHolder, CellData cellData, Cell cell, Head head, Integer integer, Boolean aBoolean) {
        }

        /**
         * Called after all operations on the cell are finished
         */
        @Override
        public void afterCellDispose(WriteSheetHolder writeSheetHolder, WriteTableHolder writeTableHolder, List<CellData> cellDataList, Cell cell, Head head, Integer relativeRowIndex, Boolean isHead) {
            // column index of the current cell
            int i = cell.getColumnIndex();
            // get the workbook from the cell
            Workbook workbook = cell.getSheet().getWorkbook();
            // skip the header row
            if (0 != cell.getRowIndex()) {
                List<Integer> integerList = map.get(cell.getRowIndex());
                // style only the cells flagged as errors for this row
                if (CollectionUtils.isNotEmpty(integerList) && integerList.contains(i)) {
                    WriteCellStyle contentWriteCellStyle = new WriteCellStyle();
                    // fill color (red index passed in by the caller)
                    contentWriteCellStyle.setFillForegroundColor(colorIndex);
                    // center vertically
                    contentWriteCellStyle.setVerticalAlignment(VerticalAlignment.CENTER);
                    // align text to the right
                    contentWriteCellStyle.setHorizontalAlignment(HorizontalAlignment.RIGHT);
                    // thin border on all four sides
                    contentWriteCellStyle.setBorderBottom(BorderStyle.THIN);
                    contentWriteCellStyle.setBorderLeft(BorderStyle.THIN);
                    contentWriteCellStyle.setBorderRight(BorderStyle.THIN);
                    contentWriteCellStyle.setBorderTop(BorderStyle.THIN);
                    // font: SimSun, 10 pt
                    WriteFont cellWriteFont = new WriteFont();
                    cellWriteFont.setFontName("宋体");
                    cellWriteFont.setFontHeightInPoints((short) 10);
                    contentWriteCellStyle.setWriteFont(cellWriteFont);
                    CellStyle cellStyle = StyleUtil.buildHeadCellStyle(workbook, contentWriteCellStyle);
                    // apply the style to column i of the current row
                    cell.getRow().getCell(i).setCellStyle(cellStyle);
                }
            }
        }
    }

    For a workbook with multiple sheets, reading works the same way:

    // the difference is that you register one listener per sheet
    // EasyExcel.readSheet(n): n is the zero-based position of the sheet in the workbook
    OltDrugFrequencyListener oltDrugFrequencyListener = new OltDrugFrequencyListener(isCfgPrd, dfCodeList, frequencyDto, oltConfigService);
    OltDrugUsageListener oltDrugUsageListener = new OltDrugUsageListener(isCfgPrd, dUCodeList, OltDrugUsageDto.builder().hosId(frequencyDto.getHosId()).build(), oltConfigService);
    OltDrugDurationListener oltDrugDurationListener = new OltDrugDurationListener(durationCodeList, OltDrugDurationDefDto.builder().hosId(frequencyDto.getHosId()).build(), oltConfigService);

    ReadSheet readSheet = EasyExcel.readSheet(0).registerReadListener(oltDrugFrequencyListener).build();
    ReadSheet readSheet2 = EasyExcel.readSheet(2).registerReadListener(oltDrugUsageListener).build();
    ReadSheet readSheet4 = EasyExcel.readSheet(4).registerReadListener(oltDrugDurationListener).build();
    excelReader.read(readSheet, readSheet2, readSheet4);

    In testing, this approach exports 20,000 rows in roughly 10 seconds without noticeable memory pressure.

  • The next example imports organization data in batches on a fixed thread pool: the list is split into chunks of 1,000 rows, each chunk is inserted by its own task, and a CountDownLatch makes the caller wait for all tasks to finish.
    package com.rk.iam.sys.service;

    /**
     * @author wdy
     * @version 1.0
     * @date 2022/4/27 14:50
     */

    import com.alibaba.fastjson.JSON;
    import com.alibaba.fastjson.serializer.SerializerFeature;
    import com.baomidou.mybatisplus.core.conditions.query.LambdaQueryWrapper;
    import com.rk.iam.sys.common.vo.SysOrgVO;
    import com.rk.iam.sys.entity.SysOrg;
    import com.rk.unified.common.utils.JwtRequestUtils;
    import com.rk.unified.common.vo.Cuser;
    // use the real SLF4J API, not the JDK-internal copy under com.sun.org.slf4j
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;
    import org.apache.commons.collections4.CollectionUtils;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.stereotype.Service;

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    /**
     * Organization batch-import service
     * @author wdy
     */
    @Service
    public class OperationOrgsService {

        private static final Logger log = LoggerFactory.getLogger(OperationOrgsService.class);
        // latch used to wait for all insert tasks
        private CountDownLatch threadsSignal;
        // number of rows handled by each task
        private static final int count = 1000;
        @Autowired
        private SysOrgService sysOrgService;
        @Autowired
        private JwtRequestUtils jwtRequestUtils;
        // fixed pool of 8 threads; each task inserts up to 1000 rows
        private static ExecutorService execPool = Executors.newFixedThreadPool(8);
    
        /**
         * Insert the rows in batches on multiple threads
         */
        public String batchAddData(List<SysOrgVO> list) throws InterruptedException, Exception {
            Map<Integer, String> response = new HashMap<>();
            if (list.size() <= count) {
                threadsSignal = new CountDownLatch(1);
                execPool.submit(new InsertDate(list));
            } else {
                List<List<SysOrgVO>> li = createList(list, count);
                threadsSignal = new CountDownLatch(li.size());
                for (List<SysOrgVO> liop : li) {
                    execPool.submit(new InsertDate(liop));
                }
            }
            // wait for every task to finish before reporting success
            threadsSignal.await();
            response.put(0, "success");
            return JSON.toJSONString(response, SerializerFeature.WriteMapNullValue);
        }
    
        /**
         * Split a list into chunks of the given size
         * @param targe source list
         * @param size  chunk size
         * @return list of chunks
         */
        public static List<List<SysOrgVO>> createList(List<SysOrgVO> targe, int size) {
            List<List<SysOrgVO>> listArr = new ArrayList<>();
            // number of chunks after splitting
            int arrSize = targe.size() % size == 0 ? targe.size() / size : targe.size() / size + 1;
            for (int i = 0; i < arrSize; i++) {
                List<SysOrgVO> sub = new ArrayList<>();
                // copy the elements belonging to this chunk
                for (int j = i * size; j <= size * (i + 1) - 1; j++) {
                    if (j <= targe.size() - 1) {
                        sub.add(targe.get(j));
                    }
                }
                listArr.add(sub);
            }
            return listArr;
        }
    
    
        /**
         * Inner task that saves one chunk of rows
         * @author wdy
         */
        class InsertDate implements Runnable {
            List<SysOrgVO> lientity = new ArrayList<>();
            Cuser cuser = jwtRequestUtils.getCuser();

            public InsertDate(List<SysOrgVO> limodel) {
                lientity.addAll(limodel);
            }

            @Override
            public void run() {
                try {
                    create(lientity, cuser);
                } finally {
                    // always count down, even if the insert throws,
                    // so the caller is never blocked forever in await()
                    threadsSignal.countDown();
                }
            }
        }
    
        /**
         * Save the rows to the database, creating the "a/b/c" hierarchy level by level
         */
        public void create(List<SysOrgVO> sysOrgVOList, Cuser cuser) {
            sysOrgVOList.forEach(sysOrgVO -> {
                String name = sysOrgVO.getName();
                Long id = 0L;
                List<SysOrg> sysOrgList = new ArrayList<>();
                boolean flag = false;
                if (name.contains("/")) {
                    String[] split = name.split("/");
                    if (split.length > 0) {
                        for (int i = 0; i < split.length; i++) {
                            sysOrgVO.setCid(cuser.getCid());
                            sysOrgVO.setRemark("batch org import");
                            if (i == 0) {
                                List<SysOrg> list = sysOrgService.lambdaQuery().eq(SysOrg::getCid, cuser.getCid()).eq(SysOrg::getName, split[i]).list();
                                if (CollectionUtils.isEmpty(list)) {
                                    sysOrgVO.setParentId(0L);
                                    sysOrgVO.setName(split[i]);
                                    SysOrg sysOrg = sysOrgService.convertToEntity(sysOrgVO);
                                    sysOrgService.save(sysOrg);
                                    id = sysOrg.getId();
                                } else {
                                    // root level already exists: load its children
                                    // for the next level's duplicate check
                                    for (SysOrg sysOrg : list) {
                                        sysOrgList = sysOrgService.getBaseMapper()
                                                .selectList(new LambdaQueryWrapper<SysOrg>()
                                                        .eq(SysOrg::getCid, cuser.getCid())
                                                        .eq(SysOrg::getParentId, sysOrg.getId()));
                                    }
                                    id = list.get(0).getId();
                                }
                            } else {
                                if (CollectionUtils.isNotEmpty(sysOrgList)) {
                                    for (SysOrg sysOrg : sysOrgList) {
                                        if (split[i].equals(sysOrg.getName())) {
                                            flag = true;
                                            break;
                                        }
                                    }
                                }
                                if (!flag) {
                                    sysOrgVO.setParentId(id);
                                    sysOrgVO.setName(split[i]);
                                    SysOrg sysOrg = sysOrgService.convertToEntity(sysOrgVO);
                                    sysOrgService.save(sysOrg);
                                    id = sysOrg.getId();
                                }
                            }
                        }
                    }
                }
            });
        }
    }
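    The chunk-split plus latch-await pattern above can be condensed with `List.subList` (equivalent to `createList` without the element-by-element copy) and a `CountDownLatch` counted down in `finally`. A self-contained sketch with hypothetical names:

```java
import java.util.*;
import java.util.concurrent.*;

// Sketch of the chunk-and-await pattern: subList-based splitting plus a
// CountDownLatch that releases the caller only after every chunk task runs.
public class BatchInsertSketch {

    public static <T> List<List<T>> partition(List<T> source, int size) {
        List<List<T>> chunks = new ArrayList<>();
        for (int from = 0; from < source.size(); from += size) {
            // copy the view so the chunk survives changes to the source list
            chunks.add(new ArrayList<>(source.subList(from, Math.min(from + size, source.size()))));
        }
        return chunks;
    }

    public static int insertAll(List<Integer> rows, int chunkSize) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<List<Integer>> chunks = partition(rows, chunkSize);
        CountDownLatch latch = new CountDownLatch(chunks.size());
        for (List<Integer> chunk : chunks) {
            pool.submit(() -> {
                try {
                    // stand-in for the real batch insert of this chunk
                } finally {
                    latch.countDown();   // always count down so the caller never hangs
                }
            });
        }
        latch.await();                   // block until every chunk task has finished
        pool.shutdown();
        return chunks.size();
    }

    public static void main(String[] args) throws InterruptedException {
        List<Integer> rows = new ArrayList<>();
        for (int i = 0; i < 2500; i++) rows.add(i);
        System.out.println(insertAll(rows, 1000)); // 2500 rows in chunks of 1000 -> 3 tasks
    }
}
```

    Counting down in `finally` is the important detail: if one insert throws and the latch never reaches zero, `await()` blocks the calling thread forever.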
