  • SDK package for 64-bit Windows machines, downloaded from the Orbbec English official site
  • Auto-scrape daily weather, daily Weibo trending searches, and daily foreign-site data (with automatic translation), and send it all by plain-text email

    Auto-scrape daily weather, daily Weibo trending searches, and daily foreign-site data (with automatic translation), and send it all as a plain-text email

    Project features:

    1. Auto-scrape the one-week forecast for specified cities from the China Weather site (weather.com.cn);
    2. Auto-scrape the titles and links of the daily Weibo trending searches;
    3. Auto-scrape the daily recommended titles and links from the foreign site http://conflictoflaws.net/, and render each title as a Chinese-English pair, with Youdao online translation as the engine;
    4. Consolidate all the scraped data into plain text, save it locally, and send it by email to the target mailboxes.

    The implementation is as follows:

    1. The main driver logic:

    def get_text():
        headers = {
            'User-Agent': 'Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.25 Safari/537.36 Core/1.70.3722.400 QQBrowser/10.5.3738.400'
        }
        url = {
            'weibo': 'https://s.weibo.com/top/summary?cate=realtimehot',
            'tianqi': 'http://www.weather.com.cn',
            'law': 'http://conflictoflaws.net',
        }
    
        weibo  = get_weibo(url, headers)
        tianqi = get_tianqi(url, headers)
        laws   = get_law(url, headers)
        law = next(laws)  # get_law is a generator that yields one dict
        
        text = oprate(weibo, tianqi, law)
        return text
    
    def main():
        print('|============正在搜集数据===========|')
        text = get_text()
        print('|======搜索完成,正在更新旧数据=====|')
        if os.path.exists('text.txt'):   # nothing to remove on the first run
            os.remove('text.txt')
        time.sleep(3)
        with open('text.txt', 'w', encoding='utf-8') as f:   # open once, not once per line
            for each in text:
                f.write(each + '\n')
        
        print('|==============准备发送=============|')
        with open('text.txt', 'r', encoding='utf-8') as f:
            string = f.read()
            time.sleep(5)
    
        try_max = 1
        while try_max < 6:
            try:
                from_addr = 'xxxx@126.com'
                password = 'xxxx'
                to_addr = ['xxxx@qq.com', 'xxxx@126.com', 'xxxx@qq.com']
                smtp_server = 'smtp.126.com'
    
                message = MIMEText(string, 'plain', 'utf-8')
                message['From'] = 'xxxx <xxxx@126.com>'
                # display header only; the actual recipients are the to_addr
                # list passed to sendmail() below
                message['To'] = 'Little Pig <SuperUser@qq.com>'
                message['Subject'] = Header(u'阿光每日小报', 'utf-8').encode()
    
                server = smtplib.SMTP(smtp_server, 25)
                server.set_debuglevel(1)
                server.login(from_addr, password)
                server.sendmail(from_addr, to_addr, message.as_string())
                server.quit()
            except SMTPDataError:
                print('|====发送失败,正在尝试重发第%d次====|' % try_max)
                try_max += 1
                time.sleep(3)
            else:
                print('|===========邮件发送完成============|')
                time.sleep(5)
                break
    
    if __name__ == '__main__':
        main()
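
    Note that port 25 carries the login in plain text. NetEase's 126 mail also accepts SMTP over implicit TLS (typically on port 465), so a slightly safer variant of the send step could look like the sketch below; this is a minimal sketch only, assuming the provider's SSL endpoint, reusing the same placeholder credentials and the imports from the complete listing further down.

    def send_report(string, from_addr, password, to_addr):
        # Same message construction as above, but over SMTP_SSL and with
        # the To header listing all recipients.
        message = MIMEText(string, 'plain', 'utf-8')
        message['From'] = from_addr
        message['To'] = ', '.join(to_addr)
        message['Subject'] = Header(u'阿光每日小报', 'utf-8').encode()
        with smtplib.SMTP_SSL('smtp.126.com', 465) as server:
            server.login(from_addr, password)
            server.sendmail(from_addr, to_addr, message.as_string())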
    

    2. Auto-scrape the one-week forecast for the specified cities (Lanzhou, Changsha, Nanjing, Hainan) from the China Weather site

    def get_tianqi(url, headers):
        lanzhou_url = url.get('tianqi') + '/weather/101160101.shtml'
        changsha_url = url.get('tianqi') + '/weather/101250101.shtml'
        nanjing_url = url.get('tianqi') + '/weather/101190101.shtml'
        hainan_url = url.get('tianqi') + '/weather/101310101.shtml'
        url_pool = [lanzhou_url, changsha_url, nanjing_url, hainan_url]
        weathers = []
        for item in url_pool:
            weather = []
            # headers must be passed by keyword: the second positional
            # argument of requests.get() is params, so the original call
            # never actually sent the User-Agent.
            html = requests.get(item, headers=headers).content.decode('utf-8')
            soup = BeautifulSoup(html, 'html.parser')
            day_list = soup.find('ul', 't clearfix').find_all('li')
            for day in day_list:
                date = day.find('h1').get_text()
                wea = day.find('p', 'wea').get_text()
                if day.find('p', 'tem').find('span'):
                    hightem = day.find('p', 'tem').find('span').get_text()
                else:
                    hightem = ''   # the current day's entry omits the daytime high
                lowtem = day.find('p', 'tem').find('i').get_text()
                weather.append([date, wea, hightem, lowtem])
            weathers.append(weather)
        return weathers
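
    The nested return value can be unpacked like this (a usage sketch; url and headers as defined in get_text above):

    # weathers[0] is Lanzhou's week; weathers[0][0] is today's entry there.
    weathers = get_tianqi(url, headers)
    date, wea, hightem, lowtem = weathers[0][0]
    print('%s %s %s/%s' % (date, wea, hightem, lowtem))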
    

    3. Auto-scrape the titles and links of the daily Weibo trending searches

    def get_weibo(url, headers):
        weibo_text = []
        weibo_url = url.get('weibo')
        html = requests.get(weibo_url, headers = headers).content.decode('utf-8')
        titles_links = re.findall(r'<td class=.*?>.*?<a href="(.*?)" target=.*?>(.*?)</a>.*?</td>', html, re.S)
        for title_link in titles_links:
            weibo_text.append({
            'title': title_link[1],
            'link' : 'https://s.weibo.com' + title_link[0]
            })
        return weibo_text
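
    Parsing HTML with a regular expression is brittle; since BeautifulSoup is already imported for the other scrapers, an equivalent DOM-based version could look like this. A sketch only, assuming the same markup the regex above targets (table cells whose links carry the topic title and relative URL):

    def get_weibo_bs(url, headers):
        # Same extraction as get_weibo, via the DOM instead of a regex.
        html = requests.get(url.get('weibo'), headers=headers).content.decode('utf-8')
        soup = BeautifulSoup(html, 'html.parser')
        weibo_text = []
        for a in soup.select('td a[href]'):
            weibo_text.append({
                'title': a.get_text(),
                'link': 'https://s.weibo.com' + a['href'],
            })
        return weibo_text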
    

    4. Auto-scrape the daily recommended titles and links from the foreign site http://conflictoflaws.net/

    def get_law(url, headers):
        html = requests.get(url.get('law'), headers = headers).content.decode('utf-8')
        soup = BeautifulSoup(html, 'html.parser')
    
        views = []
        view_list = soup.find_all('h2', 'headline')
        for view in view_list:
            view_title = view.find('a').get_text()
            view_a = view.find('a')
            view_link = view_a['href']
            views.append([view_title, view_link])
    
        news = []
        new_list = soup.find_all('p', 'pis-title')
        for new in new_list:
            new_title = new.find('a').get_text()
            new_a = new.find('a')
            new_link = new_a['href']
            news.append([new_title, new_link])
        
        yield {
            'view': views,
            'new': news
        }
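
    Because of the yield, get_law returns a generator that produces exactly one dict, which is why get_text above retrieves the result with next() rather than using the return value directly. Equivalently:

    # get_law(...) is a generator; next() pulls its single yielded dict.
    law = next(get_law(url, headers))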
    

    5. Following on from the previous step, render the titles as Chinese-English pairs, with Youdao online translation as the engine

    def law_translate(law):
        url = "http://fanyi.youdao.com/translate?smartresult=dict&smartresult=rule"
        head = {}
        head['User-Agent'] = 'Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.25 Safari/537.36 Core/1.70.3722.400 QQBrowser/10.5.3738.400'
    
        view_str = []
        view_content = law.get('view')
        for v_content in view_content:
            view_str.append(v_content[0])
    
        new_str = []
        new_content = law.get('new')
        for n_content in new_content:
            new_str.append(n_content[0])
    
        translation = []
        for each in view_str:
            data = {}
            data['i'] = each
            data['from'] = 'AUTO'
            data['to'] = 'AUTO'
            data['smartresult'] = 'dict'
            data['client'] = 'fanyideskweb'
            # salt/sign/ts/bv are anti-crawler tokens captured from a browser
            # session; they are tied to the request they came from and may
            # stop working once Youdao rotates them.
            data['salt'] = '15658686268937'
            data['sign'] = 'ee53369f775bc53f8be7328d3afb3631'
            data['ts'] = '1565868626893'
            data['bv'] = 'b9bd10e2943f377d66e859990bbee707'
            data['doctype'] = 'json'
            data['version'] = '2.1'
            data['keyfrom'] = 'fanyi.web'
            data['action'] = 'FY_BY_REALTlME'   # sic: lowercase 'l', as in the site's own JS
    
            data = urllib.parse.urlencode(data).encode('utf-8')
            req = urllib.request.Request(url, data, head)
            response = urllib.request.urlopen(req)
            html = response.read().decode('utf-8')
            target = json.loads(html)
            result = target['translateResult'][0][0]['tgt']
            translation.append(result)
        
        for each in new_str:
            data = {}
            data['i'] = each
            data['from'] = 'AUTO'
            data['to'] = 'AUTO'
            data['smartresult'] = 'dict'
            data['client'] = 'fanyideskweb'
            data['salt'] = '15658686268937'
            data['sign'] = 'ee53369f775bc53f8be7328d3afb3631'
            data['ts'] = '1565868626893'
            data['bv'] = 'b9bd10e2943f377d66e859990bbee707'
            data['doctype'] = 'json'
            data['version'] = '2.1'
            data['keyfrom'] = 'fanyi.web'
            data['action'] = 'FY_BY_REALTlME'
    
            data = urllib.parse.urlencode(data).encode('utf-8')
            req = urllib.request.Request(url, data, head)
            response = urllib.request.urlopen(req)
            html = response.read().decode('utf-8')
            target = json.loads(html)
            result = target['translateResult'][0][0]['tgt']
            translation.append(result)
    
        pairs = []   # a fresh name; the original shadowed the function's own name
        i = 0
        for l in law.get('view'):
            pairs.append(l[0])
            pairs.append(translation[i])
            pairs.append(l[1])
            i += 1
        for l in law.get('new'):
            pairs.append(l[0])
            pairs.append(translation[i])
            pairs.append(l[1])
            i += 1
        return pairs
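
    The two translation loops are identical apart from the list they read, so the POST can be factored into one helper. A sketch only; the hard-coded tokens are kept verbatim from above and may stop working once Youdao rotates them:

    def youdao_translate(text, url, head):
        # One POST per title; form fields exactly as captured above.
        data = {
            'i': text, 'from': 'AUTO', 'to': 'AUTO', 'smartresult': 'dict',
            'client': 'fanyideskweb', 'salt': '15658686268937',
            'sign': 'ee53369f775bc53f8be7328d3afb3631', 'ts': '1565868626893',
            'bv': 'b9bd10e2943f377d66e859990bbee707', 'doctype': 'json',
            'version': '2.1', 'keyfrom': 'fanyi.web', 'action': 'FY_BY_REALTlME',
        }
        body = urllib.parse.urlencode(data).encode('utf-8')
        req = urllib.request.Request(url, body, head)
        html = urllib.request.urlopen(req).read().decode('utf-8')
        return json.loads(html)['translateResult'][0][0]['tgt']

    translation = [youdao_translate(t, url, head) for t in view_str + new_str]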
    

    6. Consolidate all the scraped data

    def oprate(weibo, tianqi, law):
        def law_translate(law):
            ...  # body as in step 5 above

        law_text = law_translate(law)   # bind the result to a new name instead of shadowing the function
      
        weibo_text = []
        for w in weibo:
            w_text = '%s>>>%s' % (w.get('title'), w.get('link'))
            weibo_text.append(w_text)
    
        tianqi_city = []
        for t in tianqi[0]: 
            t_lanzhou = '兰州%s天气为%s,最高气温%s,最低气温%s' % (t[0], t[1], t[2], t[3])
            tianqi_city.append(t_lanzhou)
        for t in tianqi[1]:
            t_changsha = '长沙%s天气为%s,最高气温%s,最低气温%s' % (t[0], t[1], t[2], t[3])
            tianqi_city.append(t_changsha)
        for t in tianqi[2]:
            t_nanjing = '南京%s天气为%s,最高气温%s,最低气温%s' % (t[0], t[1], t[2], t[3])
            tianqi_city.append(t_nanjing)
        for t in tianqi[3]:
            t_hainan = '海南%s天气为%s,最高气温%s,最低气温%s' % (t[0], t[1], t[2], t[3])
            tianqi_city.append(t_hainan)
    
        text = ['【天气】\n']
        for each in tianqi_city:
            text.append(each)
        text.append('\n【微博热搜】\n')
        for each in weibo_text:
            text.append(each)
        text.append('\n【Conflict of Laws】\n')
        for each in law_text:
            text.append(each)
        return text
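
    The four per-city loops differ only in the city name; zipping the names with get_tianqi's return value removes the repetition (a sketch; the order matches url_pool):

    tianqi_city = []
    for name, week in zip(['兰州', '长沙', '南京', '海南'], tianqi):
        for t in week:
            tianqi_city.append('%s%s天气为%s,最高气温%s,最低气温%s'
                               % (name, t[0], t[1], t[2], t[3]))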
    

    Putting the code above together completes the project; the results look like this:

    Launching the program collects the data and sends the email automatically.
    [screenshot: console output]
    The local text file:
    [screenshots: saved text.txt contents]
    The email as received in the target mailbox:
    [screenshots: received email]

    The complete code:

    from email.header import Header
    from email.mime.text import MIMEText
    import smtplib
    from smtplib import SMTPDataError
    import requests
    import re
    from bs4 import BeautifulSoup
    import urllib.request
    import urllib.parse
    import json
    import time
    import os 
    
    def get_weibo(url, headers):
        weibo_text = []
        weibo_url = url.get('weibo')
        html = requests.get(weibo_url, headers = headers).content.decode('utf-8')
        titles_links = re.findall(r'<td class=.*?>.*?<a href="(.*?)" target=.*?>(.*?)</a>.*?</td>', html, re.S)
        for title_link in titles_links:
            weibo_text.append({
            'title': title_link[1],
            'link' : 'https://s.weibo.com' + title_link[0]
            })
        return weibo_text
        # [{title,link}{title,link}...{title,link}]
    
    def get_tianqi(url, headers):
        lanzhou_url = url.get('tianqi') + '/weather/101160101.shtml'
        changsha_url = url.get('tianqi') + '/weather/101250101.shtml'
        nanjing_url = url.get('tianqi') + '/weather/101190101.shtml'
        hainan_url = url.get('tianqi') + '/weather/101310101.shtml'
        url_pool = [lanzhou_url, changsha_url, nanjing_url, hainan_url]
        weathers = []
        for item in url_pool:
            weather = []
            # headers must be passed by keyword: the second positional
            # argument of requests.get() is params, so the original call
            # never actually sent the User-Agent.
            html = requests.get(item, headers=headers).content.decode('utf-8')
            soup = BeautifulSoup(html, 'html.parser')
            day_list = soup.find('ul', 't clearfix').find_all('li')
            for day in day_list:
                date = day.find('h1').get_text()
                wea = day.find('p', 'wea').get_text()
                if day.find('p', 'tem').find('span'):
                    hightem = day.find('p', 'tem').find('span').get_text()
                else:
                    hightem = ''   # the current day's entry omits the daytime high
                lowtem = day.find('p', 'tem').find('i').get_text()
                weather.append([date, wea, hightem, lowtem])
            weathers.append(weather)
        return weathers
        # [[...]*7]*4: weathers[i] is city i's one-week forecast; weathers[i][0] is that city's weather today
    
    def get_law(url, headers):
        html = requests.get(url.get('law'), headers = headers).content.decode('utf-8')
        soup = BeautifulSoup(html, 'html.parser')
    
        views = []
        view_list = soup.find_all('h2', 'headline')
        for view in view_list:
            view_title = view.find('a').get_text()
            view_a = view.find('a')
            view_link = view_a['href']
            views.append([view_title, view_link])
    
        news = []
        new_list = soup.find_all('p', 'pis-title')
        for new in new_list:
            new_title = new.find('a').get_text()
            new_a = new.find('a')
            new_link = new_a['href']
            news.append([new_title, new_link])
        
        yield {
            'view': views,
            'new': news
        }
        # {'view': [[title, link], ...], 'new': [[title, link], ...]}
    
    def oprate(weibo, tianqi, law):
        def law_translate(law):
            url = "http://fanyi.youdao.com/translate?smartresult=dict&smartresult=rule"
            head = {}
            head['User-Agent'] = 'Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.25 Safari/537.36 Core/1.70.3722.400 QQBrowser/10.5.3738.400'
    
            view_str = []
            view_content = law.get('view')
            for v_content in view_content:
                view_str.append(v_content[0])
    
            new_str = []
            new_content = law.get('new')
            for n_content in new_content:
                new_str.append(n_content[0])
    
            translation = []
            for each in view_str:
                data = {}
                data['i'] = each
                data['from'] = 'AUTO'
                data['to'] = 'AUTO'
                data['smartresult'] = 'dict'
                data['client'] = 'fanyideskweb'
                # salt/sign/ts/bv are anti-crawler tokens captured from a
                # browser session; they may stop working once Youdao rotates them.
                data['salt'] = '15658686268937'
                data['sign'] = 'ee53369f775bc53f8be7328d3afb3631'
                data['ts'] = '1565868626893'
                data['bv'] = 'b9bd10e2943f377d66e859990bbee707'
                data['doctype'] = 'json'
                data['version'] = '2.1'
                data['keyfrom'] = 'fanyi.web'
                data['action'] = 'FY_BY_REALTlME'   # sic: lowercase 'l', as in the site's own JS
        
                data = urllib.parse.urlencode(data).encode('utf-8')
                req = urllib.request.Request(url, data, head)
                response = urllib.request.urlopen(req)
                html = response.read().decode('utf-8')
                target = json.loads(html)
                result = target['translateResult'][0][0]['tgt']
                translation.append(result)
            
            for each in new_str:
                data = {}
                data['i'] = each
                data['from'] = 'AUTO'
                data['to'] = 'AUTO'
                data['smartresult'] = 'dict'
                data['client'] = 'fanyideskweb'
                data['salt'] = '15658686268937'
                data['sign'] = 'ee53369f775bc53f8be7328d3afb3631'
                data['ts'] = '1565868626893'
                data['bv'] = 'b9bd10e2943f377d66e859990bbee707'
                data['doctype'] = 'json'
                data['version'] = '2.1'
                data['keyfrom'] = 'fanyi.web'
                data['action'] = 'FY_BY_REALTlME'
        
                data = urllib.parse.urlencode(data).encode('utf-8')
                req = urllib.request.Request(url, data, head)
                response = urllib.request.urlopen(req)
                html = response.read().decode('utf-8')
                target = json.loads(html)
                result = target['translateResult'][0][0]['tgt']
                translation.append(result)
    
            pairs = []   # a fresh name; the original shadowed the function's own name
            i = 0
            for l in law.get('view'):
                pairs.append(l[0])
                pairs.append(translation[i])
                pairs.append(l[1])
                i += 1
            for l in law.get('new'):
                pairs.append(l[0])
                pairs.append(translation[i])
                pairs.append(l[1])
                i += 1
            return pairs
    
        law_text = law_translate(law)   # bind the result to a new name instead of shadowing the function
      
        weibo_text = []
        for w in weibo:
            w_text = '%s>>>%s' % (w.get('title'), w.get('link'))
            weibo_text.append(w_text)
    
        tianqi_city = []
        for t in tianqi[0]: 
            t_lanzhou = '兰州%s天气为%s,最高气温%s,最低气温%s' % (t[0], t[1], t[2], t[3])
            tianqi_city.append(t_lanzhou)
        for t in tianqi[1]:
            t_changsha = '长沙%s天气为%s,最高气温%s,最低气温%s' % (t[0], t[1], t[2], t[3])
            tianqi_city.append(t_changsha)
        for t in tianqi[2]:
            t_nanjing = '南京%s天气为%s,最高气温%s,最低气温%s' % (t[0], t[1], t[2], t[3])
            tianqi_city.append(t_nanjing)
        for t in tianqi[3]:
            t_hainan = '海南%s天气为%s,最高气温%s,最低气温%s' % (t[0], t[1], t[2], t[3])
            tianqi_city.append(t_hainan)
    
        text = ['【天气】\n']
        for each in tianqi_city:
            text.append(each)
        text.append('\n【微博热搜】\n')
        for each in weibo_text:
            text.append(each)
        text.append('\n【Conflict of Laws】\n')
        for each in law_text:
            text.append(each)
        return text
    
    def get_text():
        headers = {
            'User-Agent': 'Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.25 Safari/537.36 Core/1.70.3722.400 QQBrowser/10.5.3738.400'
        }
        url = {
            'weibo': 'https://s.weibo.com/top/summary?cate=realtimehot',
            'tianqi': 'http://www.weather.com.cn',
            'law': 'http://conflictoflaws.net',
        }
    
        weibo  = get_weibo(url, headers)
        tianqi = get_tianqi(url, headers)
        laws   = get_law(url, headers)
        law = next(laws)  # get_law is a generator that yields one dict
        
        text = oprate(weibo, tianqi, law)
        return text
    
    def main():
        print('|============正在搜集数据===========|')
        text = get_text()
        print('|======搜索完成,正在更新旧数据=====|')
        if os.path.exists('text.txt'):   # nothing to remove on the first run
            os.remove('text.txt')
        time.sleep(3)
        with open('text.txt', 'w', encoding='utf-8') as f:   # open once, not once per line
            for each in text:
                f.write(each + '\n')
        
        print('|==============准备发送=============|')
        with open('text.txt', 'r', encoding='utf-8') as f:
            string = f.read()
            time.sleep(5)
    
        try_max = 1
        while try_max < 6:
            try:
                from_addr = 'xxxx@126.com'
                password = 'xxxx'
                to_addr = ['xxxx@qq.com', 'xxxx@126.com', 'xxxx@qq.com']
                smtp_server = 'smtp.126.com'
    
                message = MIMEText(string, 'plain', 'utf-8')
                message['From'] = 'xxxx <xxxx@126.com>'
                # display header only; the actual recipients are the to_addr
                # list passed to sendmail() below
                message['To'] = 'Little Pig <SuperUser@qq.com>'
                message['Subject'] = Header(u'阿光每日小报', 'utf-8').encode()
    
                server = smtplib.SMTP(smtp_server, 25)
                server.set_debuglevel(1)
                server.login(from_addr, password)
                server.sendmail(from_addr, to_addr, message.as_string())
                server.quit()
            except SMTPDataError:
                print('|====发送失败,正在尝试重发第%d次====|' % try_max)
                try_max += 1
                time.sleep(3)
            else:
                print('|===========邮件发送完成============|')
                time.sleep(5)
                break
    
    if __name__ == '__main__':
        main()
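
    Since the report is meant to go out daily, one stdlib-only way to keep it running unattended is to sleep until a fixed hour each day. A sketch only; the 07:00 hour is an arbitrary choice, and cron or the Windows Task Scheduler are the usual alternatives:

    import datetime

    def run_daily(hour=7):
        # Sleep until the next `hour`:00, run main(), repeat.
        while True:
            now = datetime.datetime.now()
            nxt = now.replace(hour=hour, minute=0, second=0, microsecond=0)
            if nxt <= now:
                nxt += datetime.timedelta(days=1)
            time.sleep((nxt - now).total_seconds())
            main()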
    

    Thanks for your support; I hope we can all keep improving together!

  • E-commerce as the platform of global trade liberalization: the design challenges of multilingual B2B sites, with example sites

     E-commerce, the platform of global trade liberalization, has unquestionably become a major trend in international business, and as e-commerce platforms play an ever larger role in globalized trade, multilingual B2B platforms have emerged as the natural product of that trend. Serving companies around the world in their native languages makes a site easy to read and promotes the spread and exchange of information; it is this convenience and approachability that knits national markets together, with far-reaching effects on international trade.

        The advantages of a multilingual site are self-evident, but before designing and developing one there are four hard problems that must be solved.

        The first is writing direction. Languages such as Arabic, Persian, and Hebrew are written right to left, not the left-to-right we are used to; accordingly, if your navigation design uses a sidebar, it must sit on the right, so do not fall back on habit.

        The second is the technical problem of collecting and searching data in different languages: when customers leave messages and add information to the database, the site must be able to capture and index that information.

        The third is the choice of character set. We normally use the Simplified Chinese (GB2312) character set, but dozens of mutually incompatible character sets exist in computing, so a multilingual site can easily produce garbled text when incompatible character sets meet; this must be anticipated and solved.
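
        The character-set problem is easy to reproduce: bytes written in one encoding and decoded in another come out as mojibake, which is why multilingual sites normally standardize on a Unicode encoding such as UTF-8. A minimal Python illustration (illustrative only, not from the original article):

        text = '网站'
        gb = text.encode('gb2312')                    # b'\xcd\xf8\xd5\xbe'
        utf8 = text.encode('utf-8')                   # b'\xe7\xbd\x91\xe7\xab\x99'
        print(gb.decode('utf-8', errors='replace'))   # mojibake: the encodings disagree
        print(utf8.decode('utf-8'))                   # 网站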

        The last is search engine optimization. Not every mainstream search engine supports multilingual pages, which is why a multilingual site is not necessarily indexed by all of them; we therefore need to know which search engines and portals the target market and customer base for each language habitually use.

        The above is a brief list of the four problems a multilingual site must solve. Only by overcoming these obstacles can a foreign-language e-commerce site develop quickly and smoothly, play to the unique strengths of a multilingual site, and reach its goals.

    Appended below are some examples of foreign-language B2B e-commerce sites:

        Comprehensive multilingual B2B sites

        Busytrade (万国商业网): http://www.busytrade.com

        EU-CN Online (中欧在线): http://www.eucnco.com/cn/cn-index.htm

        Specialized multilingual B2B sites

        China Building Materials Trading Network (中国建材交易总网): http://www.bmbtob.com/

     
  • Botnets: a foreign-literature translation for a graduation project, Nanjing University of Posts and Telecommunications (2015)
  • Tsinghua University Press, fourth edition
  • Foreign-language literature on web-based chat rooms, with translation
  • Computer networking foreign-language literature: Computer-Aided Simulation Model for Natural Gas Pipeline Network System Operations

    Computer networking, foreign-language literature:

    Ind. Eng. Chem. Res. 2004, 43, 990-1002

    PROCESS DESIGN AND CONTROL

    Computer-Aided Simulation Model for Natural Gas Pipeline Network System Operations

    Panote Nimmanonda, Varanon Uraikul, Christine W. Chan, and Paitoon Tontiwachwuthikul*
    Faculty of Engineering, University of Regina, Regina, Saskatchewan, Canada S4S 0A2

    This paper presents the development of a computer-aided simulation model for natural gas pipeline network system operations. The simulation model is a useful tool for simulating and analyzing the behavior of natural gas pipeline systems under different operating conditions. Historical data and knowledge of natural gas pipeline system operations are crucial information used in formulating the simulation model. This model incorporates the natural gas properties, energy balance, and mass balance that lay the foundation of knowledge for natural gas pipeline network systems. The user can employ the simulation model to create a natural gas pipeline network system, selecting the components of natural gas, pipe diameters, and compressor capacities for different seasons. Because the natural gas consumption rate continuously varies with time, the dynamic simulation model was built to display state variables of the natural gas pipeline system and to provide guidance to the users on how to operate the system properly. The simulation model was implemented on Flash (Macromedia) and supports use of the simulation model on the Internet. The model was tested and validated using the data from the St. Louis East system, which is a subsystem of the natural gas pipeline network system of

  • A complete foreign-language literature translation on computer networks, including the English original and the Chinese translation; well suited as the translated literature for a graduation project. I paid to download it and am sharing it here.
  • A classic foreign-language original on networking for computer-science graduation translation, downloaded from a foreign site
  • The Web's scale and dynamism pose huge challenges to most data mining techniques that try to extract patterns from Web data such as usage and content. Scalable data mining methods are expected to meet the challenge of size while continuously tracking trends in noisy, evolving data, without any...
  • Network information security: foreign-language literature translation (English original and Chinese translation included)

    Network information security: foreign-language literature translation (English original and Chinese translation included)

    Source: Science & Technology Information, 12(2):31-41

    English original: Security of Computer Network System

    Henny Jone

    Abstract:

    This paper discusses the security and dependability of computer network systems, covering the importance of network security, basic theory, functions, and methods of solving the problems, and puts forward sound approaches, with the aim of strengthening people's awareness of network security.

    Key words: computer network, virtual private network, encryption techniques, firewall

    Introduction

    Along with the development of computer network technology, network security and reliability have become questions of common interest to all users. People hope their own network systems can run reliably, without disturbance or destruction by outside intruders. Carefully solving the problems of network security and reliability is therefore the premise of, and the safeguard for, a network's normal operation.

    1 Importance of the network security

    With informatization developing fast today, the computer network has obtained widespread application, but as the volume of information transmitted over the network grows, while some organizations and departments benefit from the speedup of their operations, their data has also suffered attack and destruction to varying degrees. An attacker may intercept information on the network, steal users' passwords and database information, tamper with database content, forge a user's identity, and deny his own signature. What is more, an attacker may delete database content, destroy nodes, release computer viruses, and so on, so that data security and the owner's interests come under serious threat. According to the FBI (US Federal Bureau of Investigation), network insecurity creates economic losses surpassing ... dollars every year; 75% of corporations report that financial loss is caused by computer system security problems. More than 50% of security thr

  • Graduation design: a foreign-literature translation on Gomoku (five-in-a-row), translated by hand rather than by machine; personally I think it is quite good. Both the English original and the Chinese translation are included, about six thousand words in total. I hope it helps.
  • Theoretical Models for Video on Demand Services on Peer-to-Peer Networks: foreign-language translation
  • An influential foreign paper on cooperative localization
  • For graduates' foreign-literature translation: an analysis of a mixed-use urban WiFi network
  • A foreign-language paper on creating web services with ASP
  • English original: Security of Computer Network System, Henny Jone. Abstract: This paper discussed the secure and dependable problem about the computer network system. On some aspects: the importance of network ...
  • High-quality wireless sensor network research from foreign journals, 2005-2008
  • Graduation design (thesis) foreign-material translation. Department: School of Computer Science and Engineering; major: Computer Science and Technology; sources: Russ Basiura, Mike Batongbacal, Professional ASP.NET Web Services; Matt Weisfeld, The Object-Orien...
  • Many students at home share the same complaint about foreign-language papers: translation. After finally finding the foreign literature you want, the long stretches of technical jargon are unreadable and require yet another translation tool, which really is a hassle.
  • Several recent IEEE papers, e.g. Sequential Monte Carlo Localization in Mobile Sensor Networks
  • Sharing some literature on .NET; this paper describes the role of .NET in the development of networks
  • Foreign-language translation of TCP/IP protocol research based on the NS2 network simulation platform (NS2, foreign language)
  • Graduation thesis (computer science) foreign-literature translation: understanding network addresses and interpreting resolution error messages
  • A foreign-language paper describing basic network architecture that gives you a basic understanding of networks; usable as a translation source
  • Android development foreign references: this column collects papers on references, Android, and foreign literature, and offers free guidance on writing foreign-literature reviews for Android development, with related reference materials. While reviewing sci-tech journals it was found that quite a few journals, when marking the start and end serial numbers of reference citations in the body text, put the separator mark between the numbers...
  • Inside IMPACT: Washington's model teacher-evaluation system
  • Document introduction: Appendix A, foreign-language translation, original part: PHP Language Basics. Active Server Pages (PHP) is a proven, well-established technology for building dynamic Web applications, ...me...
  • Java foreign-language literature (2014): a paper on Java with the English original and the Chinese translation side by side
