Q&A
  • The database connection fails when the project starts up
    The database connection fails when the project starts up.
    
    Here is my application.xml file:
    [code]
    <bean id="dataSource" class="org.springframework.jndi.JndiObjectFactoryBean">
    <property name="jndiName">
    <value>jdbc/mmp</value>
    </property>
    </bean>
    <bean id="HibernateConfig" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
    <property name="dataSource">
    <ref local="dataSource" />
    </property>
    <property name="hibernateProperties">
    <value>hibernate.query.factory_class=org.hibernate.hql.classic.ClassicQueryTranslatorFactory hibernate.show_sql=true</value>
    </property>
    <property name="mappingResources">
    <list>
    <value>com/travelsky/mmp/hibernate/TblAgent.hbm.xml</value>
    <value>com/travelsky/mmp/hibernate/Tbluser.hbm.xml</value>
    <value>com/travelsky/mmp/hibernate/Tblsystemlog.hbm.xml</value>
    <value>com/travelsky/mmp/hibernate/TblContractType.hbm.xml</value>
    <value>com/travelsky/mmp/hibernate/TblPid.hbm.xml</value>
    <value>com/travelsky/mmp/hibernate/Tblpopedom.hbm.xml</value>
    <value>com/travelsky/mmp/hibernate/TblContractSort.hbm.xml</value>
    <value>com/travelsky/mmp/hibernate/TblPidTotal.hbm.xml</value>
    <value>com/travelsky/mmp/hibernate/TblOffice.hbm.xml</value>
    <value>com/travelsky/mmp/hibernate/Tblpopedomofgroup.hbm.xml</value>
    <value>com/travelsky/mmp/hibernate/Tblpopedomoforganization.hbm.xml</value>
    <value>com/travelsky/mmp/hibernate/TblPidType.hbm.xml</value>
    <value>com/travelsky/mmp/hibernate/TblContract.hbm.xml</value>
    <value>com/travelsky/mmp/hibernate/Tblsysparameter.hbm.xml</value>
    <value>com/travelsky/mmp/hibernate/TblContractAgtDetail.hbm.xml</value>
    <value>com/travelsky/mmp/hibernate/Tblorganization.hbm.xml</value>
    <value>com/travelsky/mmp/hibernate/Tblpopedomgroup.hbm.xml</value>
    <value>com/travelsky/mmp/hibernate/Tblpopedomofuser.hbm.xml</value>
    <value>com/travelsky/mmp/hibernate/TblCountry.hbm.xml</value>
    <value>com/travelsky/mmp/hibernate/TblCity.hbm.xml</value>
    <value>com/travelsky/mmp/hibernate/TblMsg.hbm.xml</value>
    <value>com/travelsky/mmp/hibernate/TblQx.hbm.xml</value>
    <value>com/travelsky/mmp/hibernate/TblZtgrp.hbm.xml</value>
    <value>com/travelsky/mmp/hibernate/TblGdsBranch.hbm.xml</value>
    <value>com/travelsky/mmp/hibernate/TblGdsMain.hbm.xml</value>
    <value>com/travelsky/mmp/hibernate/TblAirportdetail.hbm.xml</value>
    <value>com/travelsky/mmp/hibernate/TblAirport.hbm.xml</value>
    <value>com/travelsky/mmp/hibernate/TblAirline.hbm.xml</value>
    <value>com/travelsky/mmp/hibernate/TblAirlinedetail.hbm.xml</value>
    <value>com/travelsky/mmp/hibernate/QryAgent.hbm.xml</value>
    <value>com/travelsky/mmp/hibernate/QryAgentOff.hbm.xml</value>
    <value>com/travelsky/mmp/hibernate/Tblsectionoffice.hbm.xml</value>
    <value>com/travelsky/mmp/hibernate/QryAgtRptNobycty.hbm.xml</value>
    </list>
    </property>
    </bean>

    [/code]
    HibernateConfig is used by all of the DAOs below, for example:

    [code]
    <bean id="TbluserDAO" class="com.travelsky.mmp.hibernate.TbluserDAO">
    <property name="sessionFactory">
    <ref bean="HibernateConfig"></ref>
    </property>
    </bean>
    <bean id="User" class="org.springframework.transaction.interceptor.TransactionProxyFactoryBean">
    <property name="transactionManager">
    <ref local="transactionManager" />
    </property>
    <property name="target">
    <bean class="com.travelsky.mmp.service.User.UserImpl">
    <property name="user">
    <ref local="TbluserDAO" />
    </property>
    <property name="tblorganization">
    <ref local="TblorganizationDAO" />
    </property>
    <property name="tblsectionoffice">
    <ref local="TblsectionofficeDAO" />
    </property>
    </bean>
    </property>
    <property name="transactionAttributes">
    <props>
    <prop key="find*">PROPAGATION_REQUIRED,readOnly</prop>
    <prop key="add*">PROPAGATION_REQUIRED</prop>
    <prop key="update*">PROPAGATION_REQUIRED</prop>
    <prop key="del*">PROPAGATION_REQUIRED</prop>
    </props>
    </property>
    </bean>[/code]
    The data source in WebSphere (jdbc/mmp) has already been tested successfully.
    The jars I use are placed inside the project itself, so there should be no missing-reference problem.
    The jars I use are listed below:
    activation.jar
    antlr-2.7.5H3.jar
    cglib-full-2.0.2.jar
    classes12.jar
    commons-attributes-api.jar
    commons-beanutils-1.6.1.jar
    commons-collections-2.1.1.jar
    commons-dbcp-1.2.1.jar
    commons-digester.jar
    commons-discovery.jar
    commons-fileupload.jar
    commons-lang-1.0.1.jar
    commons-logging-api-1.0.2.jar
    commons-pool-1.2.jar
    commons-validator.jar
    concurrent-1.3.3.jar
    connector.jar
    dom4j-1.6.1.jar
    edtftpj-1.5.4.jar
    ehcache-1.1.jar
    hibernate3.jar
    jakarta-oro.jar
    jaxen-1.1-beta-6.jar
    jta.jar
    log4j-1.2.11.jar
    mail.jar
    odmg-3.0.jar
    ojdbc14.jar
    oscache-2.0.2.jar
    poi-2.5.1-final-20040804.jar
    public.jar
    quartz-1.6.0.jar
    spring.jar
    struts.jar
    Teradata.jar
    velocity-dep-1.3.1.jar
    velocity-tools-1.1.jar
    The WebSphere startup log is as follows:
    [07-4-4 15:01:11:562 CST] 00000024 Environment I org.hibernate.cfg.Environment <clinit> Hibernate 3.0.5
    [07-4-4 15:01:11:640 CST] 00000024 Environment I org.hibernate.cfg.Environment <clinit> hibernate.properties not found
    [07-4-4 15:01:11:734 CST] 00000024 Environment I org.hibernate.cfg.Environment <clinit> using CGLIB reflection optimizer
    [07-4-4 15:01:11:796 CST] 00000024 Environment I org.hibernate.cfg.Environment <clinit> using JDK 1.4 java.sql.Timestamp handling
    [07-4-4 15:01:12:421 CST] 00000024 HbmBinder I org.hibernate.cfg.HbmBinder bindRootPersistentClassCommonValues Mapping class: com.travelsky.mmp.hibernate.TblAgent -> TBL_AGENT
    [07-4-4 15:01:12:703 CST] 00000024 HbmBinder I org.hibernate.cfg.HbmBinder bindRootPersistentClassCommonValues Mapping class: com.travelsky.mmp.hibernate.Tbluser -> TBLUSER
    [07-4-4 15:01:12:812 CST] 00000024 HbmBinder I org.hibernate.cfg.HbmBinder bindRootPersistentClassCommonValues Mapping class: com.travelsky.mmp.hibernate.Tblsystemlog -> TBLSYSTEMLOG
    [07-4-4 15:01:12:921 CST] 00000024 HbmBinder I org.hibernate.cfg.HbmBinder bindRootPersistentClassCommonValues Mapping class: com.travelsky.mmp.hibernate.TblContractType -> TBL_CONTRACT_TYPE
    [07-4-4 15:01:13:031 CST] 00000024 HbmBinder I org.hibernate.cfg.HbmBinder bindRootPersistentClassCommonValues Mapping class: com.travelsky.mmp.hibernate.TblPid -> TBL_PID
    [07-4-4 15:01:13:125 CST] 00000024 HbmBinder I org.hibernate.cfg.HbmBinder bindRootPersistentClassCommonValues Mapping class: com.travelsky.mmp.hibernate.Tblpopedom -> TBLPOPEDOM
    [07-4-4 15:01:13:218 CST] 00000024 HbmBinder I org.hibernate.cfg.HbmBinder bindRootPersistentClassCommonValues Mapping class: com.travelsky.mmp.hibernate.TblContractSort -> TBL_CONTRACT_SORT
    [07-4-4 15:01:13:296 CST] 00000024 HbmBinder I org.hibernate.cfg.HbmBinder bindRootPersistentClassCommonValues Mapping class: com.travelsky.mmp.hibernate.TblPidTotal -> TBL_PID_TOTAL
    [07-4-4 15:01:13:406 CST] 00000024 HbmBinder I org.hibernate.cfg.HbmBinder bindRootPersistentClassCommonValues Mapping class: com.travelsky.mmp.hibernate.TblOffice -> TBL_OFFICE
    [07-4-4 15:01:13:515 CST] 00000024 HbmBinder I org.hibernate.cfg.HbmBinder bindRootPersistentClassCommonValues Mapping class: com.travelsky.mmp.hibernate.Tblpopedomofgroup -> TBLPOPEDOMOFGROUP
    [07-4-4 15:01:13:593 CST] 00000024 HbmBinder I org.hibernate.cfg.HbmBinder bindRootPersistentClassCommonValues Mapping class: com.travelsky.mmp.hibernate.Tblpopedomoforganization -> TBLPOPEDOMOFORGANIZATION
    [07-4-4 15:01:13:703 CST] 00000024 HbmBinder I org.hibernate.cfg.HbmBinder bindRootPersistentClassCommonValues Mapping class: com.travelsky.mmp.hibernate.TblPidType -> TBL_PID_TYPE
    [07-4-4 15:01:13:796 CST] 00000024 HbmBinder I org.hibernate.cfg.HbmBinder bindRootPersistentClassCommonValues Mapping class: com.travelsky.mmp.hibernate.TblContract -> TBL_CONTRACT
    [07-4-4 15:01:13:875 CST] 00000024 HbmBinder I org.hibernate.cfg.HbmBinder bindRootPersistentClassCommonValues Mapping class: com.travelsky.mmp.hibernate.Tblsysparameter -> TBLSYSPARAMETER
    [07-4-4 15:01:13:968 CST] 00000024 HbmBinder I org.hibernate.cfg.HbmBinder bindRootPersistentClassCommonValues Mapping class: com.travelsky.mmp.hibernate.TblContractAgtDetail -> TBL_CONTRACT_AGT_DETAIL
    [07-4-4 15:01:14:062 CST] 00000024 HbmBinder I org.hibernate.cfg.HbmBinder bindRootPersistentClassCommonValues Mapping class: com.travelsky.mmp.hibernate.Tblorganization -> TBLORGANIZATION
    [07-4-4 15:01:14:156 CST] 00000024 HbmBinder I org.hibernate.cfg.HbmBinder bindRootPersistentClassCommonValues Mapping class: com.travelsky.mmp.hibernate.Tblpopedomgroup -> TBLPOPEDOMGROUP
    [07-4-4 15:01:14:250 CST] 00000024 HbmBinder I org.hibernate.cfg.HbmBinder bindRootPersistentClassCommonValues Mapping class: com.travelsky.mmp.hibernate.Tblpopedomofuser -> TBLPOPEDOMOFUSER
    [07-4-4 15:01:14:359 CST] 00000024 HbmBinder I org.hibernate.cfg.HbmBinder bindRootPersistentClassCommonValues Mapping class: com.travelsky.mmp.hibernate.TblCountry -> TBL_COUNTRY
    [07-4-4 15:01:14:437 CST] 00000024 HbmBinder I org.hibernate.cfg.HbmBinder bindRootPersistentClassCommonValues Mapping class: com.travelsky.mmp.hibernate.TblCity -> TBL_CITY
    [07-4-4 15:01:14:531 CST] 00000024 HbmBinder I org.hibernate.cfg.HbmBinder bindRootPersistentClassCommonValues Mapping class: com.travelsky.mmp.hibernate.TblMsg -> TBL_MSG
    [07-4-4 15:01:14:625 CST] 00000024 HbmBinder I org.hibernate.cfg.HbmBinder bindRootPersistentClassCommonValues Mapping class: com.travelsky.mmp.hibernate.TblQx -> TBL_QX
    [07-4-4 15:01:14:718 CST] 00000024 HbmBinder I org.hibernate.cfg.HbmBinder bindRootPersistentClassCommonValues Mapping class: com.travelsky.mmp.hibernate.TblZtgrp -> TBL_ZTGRP
    [07-4-4 15:01:14:812 CST] 00000024 HbmBinder I org.hibernate.cfg.HbmBinder bindRootPersistentClassCommonValues Mapping class: com.travelsky.mmp.hibernate.TblGdsBranch -> TBL_GDS_BRANCH
    [07-4-4 15:01:14:890 CST] 00000024 HbmBinder I org.hibernate.cfg.HbmBinder bindRootPersistentClassCommonValues Mapping class: com.travelsky.mmp.hibernate.TblGdsMain -> TBL_GDS_MAIN
    [07-4-4 15:01:14:984 CST] 00000024 HbmBinder I org.hibernate.cfg.HbmBinder bindRootPersistentClassCommonValues Mapping class: com.travelsky.mmp.hibernate.TblAirportdetail -> TBL_AIRPORTDETAIL
    [07-4-4 15:01:15:093 CST] 00000024 HbmBinder I org.hibernate.cfg.HbmBinder bindRootPersistentClassCommonValues Mapping class: com.travelsky.mmp.hibernate.TblAirport -> TBL_AIRPORT
    [07-4-4 15:01:15:218 CST] 00000024 HbmBinder I org.hibernate.cfg.HbmBinder bindRootPersistentClassCommonValues Mapping class: com.travelsky.mmp.hibernate.TblAirline -> TBL_AIRLINE
    [07-4-4 15:01:15:296 CST] 00000024 HbmBinder I org.hibernate.cfg.HbmBinder bindRootPersistentClassCommonValues Mapping class: com.travelsky.mmp.hibernate.TblAirlinedetail -> TBL_AIRLINEDETAIL
    [07-4-4 15:01:15:390 CST] 00000024 HbmBinder I org.hibernate.cfg.HbmBinder bindRootPersistentClassCommonValues Mapping class: com.travelsky.mmp.hibernate.QryAgent -> QRY_AGENT
    [07-4-4 15:01:15:515 CST] 00000024 HbmBinder I org.hibernate.cfg.HbmBinder bindRootPersistentClassCommonValues Mapping class: com.travelsky.mmp.hibernate.QryAgentOff -> QRY_AGT_OFF
    [07-4-4 15:01:15:609 CST] 00000024 HbmBinder I org.hibernate.cfg.HbmBinder bindRootPersistentClassCommonValues Mapping class: com.travelsky.mmp.hibernate.Tblsectionoffice -> TBLSECTIONOFFICE
    [07-4-4 15:01:15:703 CST] 00000024 HbmBinder I org.hibernate.cfg.HbmBinder bindRootPersistentClassCommonValues Mapping class: com.travelsky.mmp.hibernate.QryAgtRptNobycty -> qry_agt_rpt_nobycty
    [07-4-4 15:01:15:781 CST] 00000024 LocalSessionF I org.springframework.orm.hibernate3.LocalSessionFactoryBean afterPropertiesSet Building new Hibernate SessionFactory
    [07-4-4 15:01:15:859 CST] 00000024 Configuration I org.hibernate.cfg.Configuration secondPassCompile processing extends queue
    [07-4-4 15:01:15:937 CST] 00000024 Configuration I org.hibernate.cfg.Configuration secondPassCompile processing collection mappings
    [07-4-4 15:01:16:000 CST] 00000024 HbmBinder I org.hibernate.cfg.HbmBinder bindCollectionSecondPass Mapping collection: com.travelsky.mmp.hibernate.TblAgent.tblPidTotals -> TBL_PID_TOTAL
    [07-4-4 15:01:16:078 CST] 00000024 HbmBinder I org.hibernate.cfg.HbmBinder bindCollectionSecondPass Mapping collection: com.travelsky.mmp.hibernate.TblAgent.tblOffices -> TBL_OFFICE
    [07-4-4 15:01:16:156 CST] 00000024 HbmBinder I org.hibernate.cfg.HbmBinder bindCollectionSecondPass Mapping collection: com.travelsky.mmp.hibernate.Tbluser.tblpopedomofusers -> TBLPOPEDOMOFUSER
    [07-4-4 15:01:16:234 CST] 00000024 HbmBinder I org.hibernate.cfg.HbmBinder bindCollectionSecondPass Mapping collection: com.travelsky.mmp.hibernate.TblContractType.tblContracts -> TBL_CONTRACT
    [07-4-4 15:01:16:312 CST] 00000024 HbmBinder I org.hibernate.cfg.HbmBinder bindCollectionSecondPass Mapping collection: com.travelsky.mmp.hibernate.Tblpopedom.tblpopedomofgroups -> TBLPOPEDOMOFGROUP
    [07-4-4 15:01:16:375 CST] 00000024 HbmBinder I org.hibernate.cfg.HbmBinder bindCollectionSecondPass Mapping collection: com.travelsky.mmp.hibernate.Tblpopedom.tblpopedomofusers -> TBLPOPEDOMOFUSER
    [07-4-4 15:01:16:453 CST] 00000024 HbmBinder I org.hibernate.cfg.HbmBinder bindCollectionSecondPass Mapping collection: com.travelsky.mmp.hibernate.TblContractSort.tblContractTypes -> TBL_CONTRACT_TYPE
    [07-4-4 15:01:16:531 CST] 00000024 HbmBinder I org.hibernate.cfg.HbmBinder bindCollectionSecondPass Mapping collection: com.travelsky.mmp.hibernate.TblContractSort.tblContracts -> TBL_CONTRACT
    [07-4-4 15:01:16:609 CST] 00000024 HbmBinder I org.hibernate.cfg.HbmBinder bindCollectionSecondPass Mapping collection: com.travelsky.mmp.hibernate.TblOffice.tblPids -> TBL_PID
    [07-4-4 15:01:16:687 CST] 00000024 HbmBinder I org.hibernate.cfg.HbmBinder bindCollectionSecondPass Mapping collection: com.travelsky.mmp.hibernate.TblPidType.tblPids -> TBL_PID
    [07-4-4 15:01:16:750 CST] 00000024 HbmBinder I org.hibernate.cfg.HbmBinder bindCollectionSecondPass Mapping collection: com.travelsky.mmp.hibernate.TblPidType.tblContractAgtDetails -> TBL_CONTRACT_AGT_DETAIL
    [07-4-4 15:01:16:828 CST] 00000024 HbmBinder I org.hibernate.cfg.HbmBinder bindCollectionSecondPass Mapping collection: com.travelsky.mmp.hibernate.TblPidType.tblPidTotals -> TBL_PID_TOTAL
    [07-4-4 15:01:16:906 CST] 00000024 HbmBinder I org.hibernate.cfg.HbmBinder bindCollectionSecondPass Mapping collection: com.travelsky.mmp.hibernate.TblContract.tblContractAgtDetails -> TBL_CONTRACT_AGT_DETAIL
    [07-4-4 15:01:16:984 CST] 00000024 HbmBinder I org.hibernate.cfg.HbmBinder bindCollectionSecondPass Mapping collection: com.travelsky.mmp.hibernate.Tblorganization.tblusers -> TBLUSER
    [07-4-4 15:01:17:062 CST] 00000024 HbmBinder I org.hibernate.cfg.HbmBinder bindCollectionSecondPass Mapping collection: com.travelsky.mmp.hibernate.Tblorganization.tblpopedomoforganizations -> TBLPOPEDOMOFORGANIZATION
    [07-4-4 15:01:17:156 CST] 00000024 HbmBinder I org.hibernate.cfg.HbmBinder bindCollectionSecondPass Mapping collection: com.travelsky.mmp.hibernate.Tblorganization.tblAgents -> TBL_AGENT
    [07-4-4 15:01:17:250 CST] 00000024 HbmBinder I org.hibernate.cfg.HbmBinder bindCollectionSecondPass Mapping collection: com.travelsky.mmp.hibernate.Tblorganization.tblsectionoffice -> TBLSECTIONOFFICE
    [07-4-4 15:01:17:343 CST] 00000024 HbmBinder I org.hibernate.cfg.HbmBinder bindCollectionSecondPass Mapping collection: com.travelsky.mmp.hibernate.Tblpopedomgroup.tblpopedomofgroups -> TBLPOPEDOMOFGROUP
    [07-4-4 15:01:17:421 CST] 00000024 HbmBinder I org.hibernate.cfg.HbmBinder bindCollectionSecondPass Mapping collection: com.travelsky.mmp.hibernate.Tblpopedomgroup.tblpopedomoforganizations -> TBLPOPEDOMOFORGANIZATION
    [07-4-4 15:01:17:500 CST] 00000024 HbmBinder I org.hibernate.cfg.HbmBinder bindCollectionSecondPass Mapping collection: com.travelsky.mmp.hibernate.TblGdsMain.tblGdsBranchs -> TBL_GDS_BRANCH
    [07-4-4 15:01:17:578 CST] 00000024 HbmBinder I org.hibernate.cfg.HbmBinder bindCollectionSecondPass Mapping collection: com.travelsky.mmp.hibernate.TblAirport.tblAirportdetails -> TBL_AIRPORTDETAIL
    [07-4-4 15:01:17:656 CST] 00000024 HbmBinder I org.hibernate.cfg.HbmBinder bindCollectionSecondPass Mapping collection: com.travelsky.mmp.hibernate.TblAirline.tblAirlinedetails -> TBL_AIRLINEDETAIL
    [07-4-4 15:01:17:734 CST] 00000024 HbmBinder I org.hibernate.cfg.HbmBinder bindCollectionSecondPass Mapping collection: com.travelsky.mmp.hibernate.Tblsectionoffice.tbluser -> TBLUSER
    [07-4-4 15:01:17:828 CST] 00000024 Configuration I org.hibernate.cfg.Configuration secondPassCompile processing association property references
    [07-4-4 15:01:17:906 CST] 00000024 Configuration I org.hibernate.cfg.Configuration secondPassCompile processing foreign key constraints
    [07-4-4 15:01:18:453 CST] 00000024 ConnectionPro I org.hibernate.connection.ConnectionProviderFactory newConnectionProvider Initializing connection provider: org.springframework.orm.hibernate3.LocalDataSourceConnectionProvider
    [07-4-4 15:01:18:656 CST] 00000024 JDBCException W org.hibernate.util.JDBCExceptionReporter logExceptions SQL Error: 17433, SQLState: null
    [07-4-4 15:01:18:750 CST] 00000024 JDBCException E org.hibernate.util.JDBCExceptionReporter logExceptions Invalid argument(s) in call DSRA0010E: SQL State = null, Error Code = 17,433
    [07-4-4 15:01:18:828 CST] 00000024 SettingsFacto W org.hibernate.cfg.SettingsFactory buildSettings Could not obtain connection metadata
    java.sql.SQLException: Invalid argument(s) in call DSRA0010E: SQL State = null, Error Code = 17,433
    at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:134)
    at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:179)
    at oracle.jdbc.dbaccess.DBError.check_error(DBError.java:1160)
    at oracle.jdbc.ttc7.TTC7Protocol.logon(TTC7Protocol.java:183)
    at oracle.jdbc.driver.OracleConnection.<init>(OracleConnection.java:346)
    at oracle.jdbc.driver.OracleDriver.getConnectionInstance(OracleDriver.java:468)
    at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:314)
    at java.sql.DriverManager.getConnection(DriverManager.java:562)
    at java.sql.DriverManager.getConnection(DriverManager.java:155)
    at oracle.jdbc.pool.OracleDataSource.getConnection(OracleDataSource.java:169)
    at oracle.jdbc.pool.OracleConnectionPoolDataSource.getPhysicalConnection(OracleConnectionPoolDataSource.java:149)
    at oracle.jdbc.pool.OracleConnectionPoolDataSource.getPooledConnection(OracleConnectionPoolDataSource.java:95)
    at oracle.jdbc.pool.OracleConnectionPoolDataSource.getPooledConnection(OracleConnectionPoolDataSource.java:63)
    at com.ibm.ws.rsadapter.spi.InternalGenericDataStoreHelper$1.run(InternalGenericDataStoreHelper.java:897)
    at com.ibm.ws.security.util.AccessController.doPrivileged(AccessController.java:118)
    at com.ibm.ws.rsadapter.spi.InternalGenericDataStoreHelper.getPooledConnection(InternalGenericDataStoreHelper.java:892)
    at com.ibm.ws.rsadapter.spi.WSRdbDataSource.getPooledConnection(WSRdbDataSource.java:1180)
    at com.ibm.ws.rsadapter.spi.WSManagedConnectionFactoryImpl.createManagedConnection(WSManagedConnectionFactoryImpl.java:1047)
    at com.ibm.ejs.j2c.FreePool.createManagedConnectionWithMCWrapper(FreePool.java:1750)
    at com.ibm.ejs.j2c.FreePool.createOrWaitForConnection(FreePool.java:1517)
    at com.ibm.ejs.j2c.PoolManager.reserve(PoolManager.java:2141)
    at com.ibm.ejs.j2c.ConnectionManager.allocateMCWrapper(ConnectionManager.java:843)
    at com.ibm.ejs.j2c.ConnectionManager.allocateConnection(ConnectionManager.java:582)
    at com.ibm.ws.rsadapter.jdbc.WSJdbcDataSource.getConnection(WSJdbcDataSource.java:431)
    at com.ibm.ws.rsadapter.jdbc.WSJdbcDataSource.getConnection(WSJdbcDataSource.java:400)
    at org.springframework.orm.hibernate3.LocalDataSourceConnectionProvider.getConnection(LocalDataSourceConnectionProvider.java:80)
    at org.hibernate.cfg.SettingsFactory.buildSettings(SettingsFactory.java:72)
    at org.hibernate.cfg.Configuration.buildSettings(Configuration.java:1463)
    at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1004)
    at org.springframework.orm.hibernate3.LocalSessionFactoryBean.newSessionFactory(LocalSessionFactoryBean.java:800)
    at org.springframework.orm.hibernate3.LocalSessionFactoryBean.afterPropertiesSet(LocalSessionFactoryBean.java:726)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1059)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:363)
    at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:226)
    at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:147)
    at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:269)
    at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:320)
    at org.springframework.web.context.support.AbstractRefreshableWebApplicationContext.refresh(AbstractRefreshableWebApplicationContext.java:134)
    at org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:246)
    at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:184)
    at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:49)
    at com.ibm.ws.wswebcontainer.webapp.WebApp.notifyServletContextCreated(WebApp.java:605)
    at com.ibm.ws.webcontainer.webapp.WebApp.commonInitializationFinish(WebApp.java:265)
    at com.ibm.ws.wswebcontainer.webapp.WebApp.initialize(WebApp.java:271)
    at com.ibm.ws.wswebcontainer.webapp.WebGroup.addWebApplication(WebGroup.java:88)
    at com.ibm.ws.wswebcontainer.VirtualHost.addWebApplication(VirtualHost.java:157)
    at com.ibm.ws.wswebcontainer.WebContainer.addWebApp(WebContainer.java:653)
    at com.ibm.ws.wswebcontainer.WebContainer.addWebApplication(WebContainer.java:606)
    at com.ibm.ws.webcontainer.component.WebContainerImpl.install(WebContainerImpl.java:333)
    at com.ibm.ws.webcontainer.component.WebContainerImpl.start(WebContainerImpl.java:549)
    at com.ibm.ws.runtime.component.ApplicationMgrImpl.start(ApplicationMgrImpl.java:1295)
    at com.ibm.ws.runtime.component.DeployedApplicationImpl.fireDeployedObjectStart(DeployedApplicationImpl.java:1129)
    at com.ibm.ws.runtime.component.DeployedModuleImpl.start(DeployedModuleImpl.java:567)
    at com.ibm.ws.runtime.component.DeployedApplicationImpl.start(DeployedApplicationImpl.java:814)
    at com.ibm.ws.runtime.component.ApplicationMgrImpl.startApplication(ApplicationMgrImpl.java:948)
    at com.ibm.ws.runtime.component.ApplicationMgrImpl$1.run(ApplicationMgrImpl.java:1478)
    at com.ibm.ws.security.auth.ContextManagerImpl.runAs(ContextManagerImpl.java:3731)
    at com.ibm.ws.security.auth.ContextManagerImpl.runAsSystem(ContextManagerImpl.java:3813)
    at com.ibm.ws.security.core.SecurityContext.runAsSystem(SecurityContext.java:245)
    at com.ibm.ws.runtime.component.ApplicationMgrImpl.startApplication(ApplicationMgrImpl.java:1483)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:64)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:615)
    at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:62)
    at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:615)
    at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:265)
    at javax.management.modelmbean.RequiredModelMBean.invokeMethod(RequiredModelMBean.java:1089)
    at javax.management.modelmbean.RequiredModelMBean.invoke(RequiredModelMBean.java:971)
    at com.sun.jmx.mbeanserver.DynamicMetaDataImpl.invoke(DynamicMetaDataImpl.java:231)
    at com.sun.jmx.mbeanserver.MetaDataImpl.invoke(MetaDataImpl.java:238)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:833)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:802)
    at com.ibm.ws.management.AdminServiceImpl$1.run(AdminServiceImpl.java:1055)
    at com.ibm.ws.security.util.AccessController.doPrivileged(AccessController.java:118)
    at com.ibm.ws.management.AdminServiceImpl.invoke(AdminServiceImpl.java:948)
    at com.ibm.ws.management.commands.AdminServiceCommands$InvokeCmd.execute(AdminServiceCommands.java:251)
    at com.ibm.ws.console.core.mbean.MBeanHelper.invoke(MBeanHelper.java:239)
    at com.ibm.ws.console.appdeployment.ApplicationDeploymentCollectionAction.execute(ApplicationDeploymentCollectionAction.java:536)
    at org.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java:484)
    at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:274)
    at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1486)
    at org.apache.struts.action.ActionServlet.doPost(ActionServlet.java:528)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:763)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:856)
    at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:966)
    at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:907)
    at com.ibm.ws.webcontainer.filter.WebAppFilterChain.doFilter(WebAppFilterChain.java:118)
    at com.ibm.ws.webcontainer.filter.WebAppFilterChain._doFilter(WebAppFilterChain.java:87)
    at com.ibm.ws.webcontainer.filter.WebAppFilterManager.doFilter(WebAppFilterManager.java:696)
    at com.ibm.ws.webcontainer.filter.WebAppFilterManager.doFilter(WebAppFilterManager.java:641)
    at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:475)
    at com.ibm.ws.wswebcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:463)
    at com.ibm.ws.webcontainer.webapp.WebAppRequestDispatcher.forward(WebAppRequestDispatcher.java:308)
    at org.apache.struts.action.RequestProcessor.doForward(RequestProcessor.java:1070)
    at org.apache.struts.tiles.TilesRequestProcessor.doForward(TilesRequestProcessor.java:273)
    at org.apache.struts.action.RequestProcessor.processForwardConfig(RequestProcessor.java:455)
    at org.apache.struts.tiles.TilesRequestProcessor.processForwardConfig(TilesRequestProcessor.java:319)
    at com.ibm.isclite.container.controller.InformationController.processForwardConfig(InformationController.java:159)
    at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:279)
    at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1486)
    at org.apache.struts.action.ActionServlet.doPost(ActionServlet.java:528)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:763)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:856)
    at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:966)
    at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:907)
    at com.ibm.ws.webcontainer.filter.WebAppFilterChain.doFilter(WebAppFilterChain.java:145)
    at com.ibm.ws.console.core.servlet.WSCUrlFilter.continueStoringTaskState(WSCUrlFilter.java:371)
    at com.ibm.ws.console.core.servlet.WSCUrlFilter.doFilter(WSCUrlFilter.java:229)
    at com.ibm.ws.webcontainer.filter.FilterInstanceWrapper.doFilter(FilterInstanceWrapper.java:190)
    at com.ibm.ws.webcontainer.filter.WebAppFilterChain.doFilter(WebAppFilterChain.java:130)
    at com.ibm.ws.webcontainer.filter.WebAppFilterChain._doFilter(WebAppFilterChain.java:87)
    at com.ibm.ws.webcontainer.filter.WebAppFilterManager.doFilter(WebAppFilterManager.java:696)
    at com.ibm.ws.webcontainer.filter.WebAppFilterManager.doFilter(WebAppFilterManager.java:641)
    at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:475)
    at com.ibm.ws.wswebcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:463)
    at com.ibm.ws.webcontainer.servlet.CacheServletWrapper.handleRequest(CacheServletWrapper.java:92)
    at com.ibm.ws.webcontainer.WebContainer.handleRequest(WebContainer.java:744)
    at com.ibm.ws.wswebcontainer.WebContainer.handleRequest(WebContainer.java:1425)
    at com.ibm.ws.webcontainer.channel.WCChannelLink.ready(WCChannelLink.java:92)
    at com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.handleDiscrimination(HttpInboundLink.java:465)
    at com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.handleNewInformation(HttpInboundLink.java:394)
    at com.ibm.ws.http.channel.inbound.impl.HttpICLReadCallback.complete(HttpICLReadCallback.java:102)
    at com.ibm.ws.ssl.channel.impl.SSLReadServiceContext$SSLReadCompletedCallback.complete(SSLReadServiceContext.java:1812)
    at com.ibm.ws.tcp.channel.impl.AioReadCompletionListener.futureCompleted(AioReadCompletionListener.java:152)
    at com.ibm.io.async.AbstractAsyncFuture.invokeCallback(AbstractAsyncFuture.java:213)
    at com.ibm.io.async.AbstractAsyncFuture.fireCompletionActions(AbstractAsyncFuture.java:195)
    at com.ibm.io.async.AsyncFuture.completed(AsyncFuture.java:136)
    at com.ibm.io.async.ResultHandler.complete(ResultHandler.java:193)
    at com.ibm.io.async.ResultHandler.runEventProcessingLoop(ResultHandler.java:725)
    at com.ibm.io.async.ResultHandler$2.run(ResultHandler.java:847)
    at com.ibm.ws.util.ThreadPool$Worker.run(ThreadPool.java:1498)

    [07-4-4 15:01:18:937 CST] 00000024 DefaultListab I org.springframework.beans.factory.support.AbstractBeanFactory destroySingletons Destroying singletons in factory {org.springframework.beans.factory.support.DefaultListableBeanFactory defining beans [dataSource,HibernateConfig,transactionManager,TblsystemlogDAO,TblsectionofficeDAO,SectionOffice,TblpopedomoforganizationDAO,AssginOrganization,AssginUser,PopedomInterFace,SystemPara,TblsysparameterDAO,TblpopedomDAO,Popedom,TblpopedomofuserDAO,TblpopedomgroupDAO,PopedomGroup,TblorganizationDAO,Organization,PopedomOfGroup,TbluserDAO,User,TblpopedomofgroupDAO,TblContractSortDAO,contractSort,TblContractTypeDAO,contractType,TblContractDAO,contract,TblContractAgtDetailDAO,contractAgtDetail,TblAgentDAO,agent,TblOfficeDAO,office,TblPidDAO,pid,TblPidTypeDAO,pidType,TblPidTotalDAO,pidTotal,TblCountryDAO,country,TblCityDAO,city,agtReportDAO,agtreport,TblMsgDAO,msg,TblQxDAO,msgqx,TblZtgrpDAO,ztgrp,TblGdsBranchDAO,gdsBranch,TblGdsMainDAO,gds,TblAirportdetailDAO,airportdetail,TblAirportDAO,airport,TblAirlineDAO,airline,TblAirlinedetailDAO,airlinedetail,dboracleconnect,dbTeradataConnect,TeraOfficeAccess,ReadAgentTemplateFile,QryAgentDAO,qryagent,QryAgentOffDAO,qryagentoff,QryAgtRptNobyctyDAO,qryAgtRptNobycty]; root of BeanFactory hierarchy}
    [07-4-4 15:01:19:046 CST] 00000024 ContextLoader E org.springframework.web.context.ContextLoader initWebApplicationContext Context initialization failed
    org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'HibernateConfig' defined in ServletContext resource [/WEB-INF/classes/applicationContext.xml]: Initialization of bean failed; nested exception is org.hibernate.HibernateException: database product name cannot be null
    org.hibernate.HibernateException: database product name cannot be null
    at org.hibernate.dialect.DialectFactory.determineDialect(DialectFactory.java:57)
    at org.hibernate.dialect.DialectFactory.buildDialect(DialectFactory.java:39)
    at org.hibernate.cfg.SettingsFactory.determineDialect(SettingsFactory.java:374)
    at org.hibernate.cfg.SettingsFactory.buildSettings(SettingsFactory.java:110)
    at org.hibernate.cfg.Configuration.buildSettings(Configuration.java:1463)
    at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1004)
    at org.springframework.orm.hibernate3.LocalSessionFactoryBean.newSessionFactory(LocalSessionFactoryBean.java:800)
    at org.springframework.orm.hibernate3.LocalSessionFactoryBean.afterPropertiesSet(LocalSessionFactoryBean.java:726)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1059)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:363)
    at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:226)
    at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:147)
    at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:269)
    at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:320)
    at org.springframework.web.context.support.AbstractRefreshableWebApplicationContext.refresh(AbstractRefreshableWebApplicationContext.java:134)
    at org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:246)
    at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:184)
    at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:49)
    at com.ibm.ws.wswebcontainer.webapp.WebApp.notifyServletContextCreated(WebApp.java:605)
    at com.ibm.ws.webcontainer.webapp.WebApp.commonInitializationFinish(WebApp.java:265)
    at com.ibm.ws.wswebcontainer.webapp.WebApp.initialize(WebApp.java:271)
    at com.ibm.ws.wswebcontainer.webapp.WebGroup.addWebApplication(WebGroup.java:88)
    at com.ibm.ws.wswebcontainer.VirtualHost.addWebApplication(VirtualHost.java:157)
    at com.ibm.ws.wswebcontainer.WebContainer.addWebApp(WebContainer.java:653)
    at com.ibm.ws.wswebcontainer.WebContainer.addWebApplication(WebContainer.java:606)
    at com.ibm.ws.webcontainer.component.WebContainerImpl.install(WebContainerImpl.java:333)
    at com.ibm.ws.webcontainer.component.WebContainerImpl.start(WebContainerImpl.java:549)
    at com.ibm.ws.runtime.component.ApplicationMgrImpl.start(ApplicationMgrImpl.java:1295)
    at com.ibm.ws.runtime.component.DeployedApplicationImpl.fireDeployedObjectStart(DeployedApplicationImpl.java:1129)
    at com.ibm.ws.runtime.component.DeployedModuleImpl.start(DeployedModuleImpl.java:567)
    at com.ibm.ws.runtime.component.DeployedApplicationImpl.start(DeployedApplicationImpl.java:814)
    at com.ibm.ws.runtime.component.ApplicationMgrImpl.startApplication(ApplicationMgrImpl.java:948)
    at com.ibm.ws.runtime.component.ApplicationMgrImpl$1.run(ApplicationMgrImpl.java:1478)
    at com.ibm.ws.security.auth.ContextManagerImpl.runAs(ContextManagerImpl.java:3731)
    at com.ibm.ws.security.auth.ContextManagerImpl.runAsSystem(ContextManagerImpl.java:3813)
    at com.ibm.ws.security.core.SecurityContext.runAsSystem(SecurityContext.java:245)
    at com.ibm.ws.runtime.component.ApplicationMgrImpl.startApplication(ApplicationMgrImpl.java:1483)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:64)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:615)
    at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:62)
    at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:615)
    at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:265)
    at javax.management.modelmbean.RequiredModelMBean.invokeMethod(RequiredModelMBean.java:1089)
    at javax.management.modelmbean.RequiredModelMBean.invoke(RequiredModelMBean.java:971)
    at com.sun.jmx.mbeanserver.DynamicMetaDataImpl.invoke(DynamicMetaDataImpl.java:231)
    at com.sun.jmx.mbeanserver.MetaDataImpl.invoke(MetaDataImpl.java:238)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:833)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:802)
    at com.ibm.ws.management.AdminServiceImpl$1.run(AdminServiceImpl.java:1055)
    at com.ibm.ws.security.util.AccessController.doPrivileged(AccessController.java:118)
    at com.ibm.ws.management.AdminServiceImpl.invoke(AdminServiceImpl.java:948)
    at com.ibm.ws.management.commands.AdminServiceCommands$InvokeCmd.execute(AdminServiceCommands.java:251)
    at com.ibm.ws.console.core.mbean.MBeanHelper.invoke(MBeanHelper.java:239)
    at com.ibm.ws.console.appdeployment.ApplicationDeploymentCollectionAction.execute(ApplicationDeploymentCollectionAction.java:536)
    at org.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java:484)
    at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:274)
    at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1486)
    at org.apache.struts.action.ActionServlet.doPost(ActionServlet.java:528)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:763)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:856)
    at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:966)
    at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:907)
    at com.ibm.ws.webcontainer.filter.WebAppFilterChain.doFilter(WebAppFilterChain.java:118)
    at com.ibm.ws.webcontainer.filter.WebAppFilterChain._doFilter(WebAppFilterChain.java:87)
    at com.ibm.ws.webcontainer.filter.WebAppFilterManager.doFilter(WebAppFilterManager.java:696)
    at com.ibm.ws.webcontainer.filter.WebAppFilterManager.doFilter(WebAppFilterManager.java:641)
    at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:475)
    at com.ibm.ws.wswebcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:463)
    at com.ibm.ws.webcontainer.webapp.WebAppRequestDispatcher.forward(WebAppRequestDispatcher.java:308)
    at org.apache.struts.action.RequestProcessor.doForward(RequestProcessor.java:1070)
    at org.apache.struts.tiles.TilesRequestProcessor.doForward(TilesRequestProcessor.java:273)
    at org.apache.struts.action.RequestProcessor.processForwardConfig(RequestProcessor.java:455)
    at org.apache.struts.tiles.TilesRequestProcessor.processForwardConfig(TilesRequestProcessor.java:319)
    at com.ibm.isclite.container.controller.InformationController.processForwardConfig(InformationController.java:159)
    at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:279)
    at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1486)
    at org.apache.struts.action.ActionServlet.doPost(ActionServlet.java:528)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:763)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:856)
    at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:966)
    at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:907)
    at com.ibm.ws.webcontainer.filter.WebAppFilterChain.doFilter(WebAppFilterChain.java:145)
    at com.ibm.ws.console.core.servlet.WSCUrlFilter.continueStoringTaskState(WSCUrlFilter.java:371)
    at com.ibm.ws.console.core.servlet.WSCUrlFilter.doFilter(WSCUrlFilter.java:229)
    at com.ibm.ws.webcontainer.filter.FilterInstanceWrapper.doFilter(FilterInstanceWrapper.java:190)
    at com.ibm.ws.webcontainer.filter.WebAppFilterChain.doFilter(WebAppFilterChain.java:130)
    at com.ibm.ws.webcontainer.filter.WebAppFilterChain._doFilter(WebAppFilterChain.java:87)
    at com.ibm.ws.webcontainer.filter.WebAppFilterManager.doFilter(WebAppFilterManager.java:696)
    at com.ibm.ws.webcontainer.filter.WebAppFilterManager.doFilter(WebAppFilterManager.java:641)
    at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:475)
    at com.ibm.ws.wswebcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:463)
    at com.ibm.ws.webcontainer.servlet.CacheServletWrapper.handleRequest(CacheServletWrapper.java:92)
    at com.ibm.ws.webcontainer.WebContainer.handleRequest(WebContainer.java:744)
    at com.ibm.ws.wswebcontainer.WebContainer.handleRequest(WebContainer.java:1425)
    at com.ibm.ws.webcontainer.channel.WCChannelLink.ready(WCChannelLink.java:92)
    at com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.handleDiscrimination(HttpInboundLink.java:465)
    at com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.handleNewInformation(HttpInboundLink.java:394)
    at com.ibm.ws.http.channel.inbound.impl.HttpICLReadCallback.complete(HttpICLReadCallback.java:102)
    at com.ibm.ws.ssl.channel.impl.SSLReadServiceContext$SSLReadCompletedCallback.complete(SSLReadServiceContext.java:1812)
    at com.ibm.ws.tcp.channel.impl.AioReadCompletionListener.futureCompleted(AioReadCompletionListener.java:152)
    at com.ibm.io.async.AbstractAsyncFuture.invokeCallback(AbstractAsyncFuture.java:213)
    at com.ibm.io.async.AbstractAsyncFuture.fireCompletionActions(AbstractAsyncFuture.java:195)
    at com.ibm.io.async.AsyncFuture.completed(AsyncFuture.java:136)
    at com.ibm.io.async.ResultHandler.complete(ResultHandler.java:193)
    at com.ibm.io.async.ResultHandler.runEventProcessingLoop(ResultHandler.java:725)
    at com.ibm.io.async.ResultHandler$2.run(ResultHandler.java:847)
    at com.ibm.ws.util.ThreadPool$Worker.run(ThreadPool.java:1498)

    Normally we develop on Tomcat; now we are porting the application to WebSphere. We have done this every couple of weeks before without any problem, but this time we hit this error. I have checked every place I can think of and still cannot find where the mistake is. Please help.
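
    The stack trace ends in Hibernate's DialectFactory complaining that the database product name is null, i.e. SettingsFactory could not read connection metadata from the JNDI data source; the earlier error 17,433 from the Oracle driver generally means invalid arguments in the connect call (often missing credentials on the data source definition). Below is a minimal diagnostic sketch; the class name and the java:comp/env JNDI name are assumptions (the exact name depends on how the resource-ref is mapped), and it simply checks, outside of Hibernate, whether the data source can hand out a connection and report its metadata.
    [code]
    // Hypothetical helper, meant to be called from a servlet or JSP inside the same web module.
    import java.sql.Connection;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;

    public class DataSourceCheck {
        public static String probe() throws Exception {
            InitialContext ctx = new InitialContext();
            // "java:comp/env/jdbc/mmp" assumes a resource-ref mapping; the Spring config above
            // looks up the plain name "jdbc/mmp", so try both if the first lookup fails.
            DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/mmp");
            Connection con = ds.getConnection();
            try {
                // This is the value Hibernate's DialectFactory needs. If this call also fails
                // with error 17,433, the problem is in the data source definition in WebSphere,
                // not in the Spring/Hibernate configuration.
                return con.getMetaData().getDatabaseProductName();
            } finally {
                con.close();
            }
        }
    }
    [/code]
    Separately, setting hibernate.dialect explicitly (for example org.hibernate.dialect.Oracle9Dialect) in hibernateProperties would let the SessionFactory be built without auto-detecting the dialect, although the underlying connection failure would still need to be fixed.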

  • Uncaught exception in server handler: javax.net.ssl.SSLHandshakeException: [Security:090476] Invalid/unknown SSL header was received from peer K8S-T03 - 192.168.149.13 during SSL handshake

    <Aug 5, 2021 8:59:04 PM> <WARNING> <Uncaught exception in server handlerjavax.net.ssl.SSLHandshakeException: [Security:090476]Invalid/unknown SSL header was received from peer K8S-T03 - 192.168.149.13 during SSL handshake.>
    javax.net.ssl.SSLHandshakeException: [Security:090476]Invalid/unknown SSL header was received from peer K8S-T03 - 192.168.149.13 during SSL handshake.
            at com.certicom.tls.interfaceimpl.TLSConnectionImpl.fireException(Unknown Source)
            at com.certicom.tls.interfaceimpl.TLSConnectionImpl.fireAlertSent(Unknown Source)
            at com.certicom.tls.record.ReadHandler.fireAlert(Unknown Source)
            at com.certicom.tls.record.ReadHandler.getProtocolVersion(Unknown Source)
            at com.certicom.tls.record.ReadHandler.checkVersion(Unknown Source)
            at com.certicom.tls.record.ReadHandler.readRecord(Unknown Source)
            at com.certicom.tls.record.ReadHandler.readUntilHandshakeComplete(Unknown Source)
            at com.certicom.tls.interfaceimpl.TLSConnectionImpl.completeHandshake(Unknown Source)
            at com.certicom.tls.record.ReadHandler.read(Unknown Source)
            at com.certicom.io.InputSSLIOStreamWrapper.read(Unknown Source)
            at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284)
            at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326)
            at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178)
            at java.io.InputStreamReader.read(InputStreamReader.java:184)
            at java.io.BufferedReader.fill(BufferedReader.java:161)
            at java.io.BufferedReader.readLine(BufferedReader.java:324)
            at java.io.BufferedReader.readLine(BufferedReader.java:389)
            at weblogic.nodemanager.server.Handler.run(Handler.java:71)
            at java.lang.Thread.run(Thread.java:748)
     

    ==================

    /app/weblogic/Oracle/Middleware/wlserver_10.3/common/nodemanager/nodemanager.properties

    #SecureListener=true
    SecureListener=false
  • Troubleshooting a case of "Unknown thread id: XXX"


    Background

    An online service occasionally throws an "Unknown thread id: XXX" exception.

    Exception stack trace

    org.springframework.jdbc.UncategorizedSQLException: 
    ### Error updating database.  Cause: java.sql.SQLException: Unknown thread id: 64278282
    ### The error may involve com.xxx.xxx_xxxxx.xxx.dao.XxxDao.insert-Inline
    ### The error occurred while setting parameters
    ### SQL: INSERT INTO `t_xxx_xxx_xxx` (XXX,XXX,XXX)    VALUES (?, ?, ? )
    ### Cause: java.sql.SQLException: Unknown thread id: 64278282
    ; uncategorized SQLException for SQL []; SQL state [HY000]; error code [1094]; Unknown thread id: 64278282; nested exception is java.sql.SQLException: Unknown thread id: 64278282
    	at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:84)
    	at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:81)
    	at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:81)
    	at org.mybatis.spring.MyBatisExceptionTranslator.translateExceptionIfPossible(MyBatisExceptionTranslator.java:73)
    	at org.mybatis.spring.SqlSessionTemplate$SqlSessionInterceptor.invoke(SqlSessionTemplate.java:371)
    	at com.sun.proxy.$Proxy26.insert(Unknown Source)
    	at org.mybatis.spring.SqlSessionTemplate.insert(SqlSessionTemplate.java:240)
    	at org.apache.ibatis.binding.MapperMethod.execute(MapperMethod.java:52)
    	at org.apache.ibatis.binding.MapperProxy.invoke(MapperProxy.java:53)
    	at com.sun.proxy.$Proxy28.insert(Unknown Source)
    	// some frames omitted here
    	at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1091)
    	at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:668)
    	at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1521)
    	at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1478)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    	at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
    	at java.lang.Thread.run(Thread.java:745)
    Caused by: java.sql.SQLException: Unknown thread id: 64278282
    	at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:959)
    	at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3870)
    	at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3806)
    	at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2470)
    	at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2617)
    	at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2546)
    	at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2504)
    	at com.mysql.jdbc.StatementImpl.executeInternal(StatementImpl.java:840)
    	at com.mysql.jdbc.StatementImpl.execute(StatementImpl.java:740)
    	at com.mysql.jdbc.StatementImpl$CancelTask$1.run(StatementImpl.java:119)
    

    Initial triage

    The exceptions are all on a single database, but the failing statements differ and so do the tables involved; we have seen the exception on INSERT, UPDATE and SELECT alike.
    What we can confirm is that at some of the times the exception occurred the database was indeed under heavy load. What was unclear is why a failed SQL statement would surface as this particular exception.

    Questions

    1. What is it: what kind of exception is this?
    2. How is it produced: how does this exception come about?
    3. How to fix it: how can this exception be resolved?

    Investigation

    We will work through these three questions one by one.

    What is it

    First, let's break the exception down from two angles: the database side and the application side.

    Analyzing the exception on the database side

    Since we use MySQL, how does MySQL actually define and describe errors? See the description in the MySQL reference manual.
    What we have loosely been calling a MySQL exception is, on the MySQL side, called an error (below: MySQL error). The following information is taken from the reference manual:

    MySQL error classification

    By where they originate, errors fall into two kinds:

    1. Server-side errors: problems starting or stopping the server process (requiring DBA intervention), or problems while executing SQL (which the DBA can act on and which very likely also need to be reported back to the client). Error code range: [1000, 1999]; there are currently six or seven hundred of them.
    2. Client-side errors: usually caused by communication problems with the server (for example, the host cannot be reached). Error code range: [2000, +); there are far fewer of these, only a few dozen.
    MySQL error structure

    A MySQL error message consists of three parts: an error code, an SQLSTATE value and an error description. Compare this with the information in our stack trace (a small sketch after the list below shows how these map onto JDBC):

    SQL state [HY000]; error code [1094]; Unknown thread id: 64278282

    1. Error code (with a symbolic name)
      Purely numeric (e.g. 1094); every error code also has a symbolic name (e.g. ER_NO_SUCH_THREAD). These codes are defined by MySQL itself and do not apply to other databases.
    2. SQLSTATE value
      A five-character string whose values come from ANSI SQL and ODBC, so it is more standardized than the numeric code; the first two characters indicate the error class:
      • 00 means success
      • 01 means warning
      • 02 means not found
      • 03 and above mean an exception
      • For server-side errors, not every MySQL error code maps to an SQLSTATE value; when there is no mapping, HY000 ("general error") is used.
      • For client-side errors, HY000 is always used.
    3. Error description
      A short description of the error (e.g. Unknown thread id: %lu).
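
    These three parts map directly onto the java.sql.SQLException accessors, which is where the values in the stack trace come from. A minimal sketch of reading them on the application side (the connection URL, credentials and table name are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class MysqlErrorParts {
        public static void main(String[] args) {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:mysql://localhost:3306/test", "user", "pass");
                 Statement st = con.createStatement()) {
                st.execute("INSERT INTO t_xxx (c1) VALUES (1)");
            } catch (SQLException e) {
                // The three parts of a MySQL error, as exposed by JDBC:
                System.out.println("error code = " + e.getErrorCode()); // e.g. 1094
                System.out.println("SQLSTATE   = " + e.getSQLState());  // e.g. HY000
                System.out.println("message    = " + e.getMessage());   // e.g. Unknown thread id: ...
            }
        }
    }
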
    Summary

    From this quick primer we can conclude that the exception we received is a server-side error. It has no dedicated SQLSTATE value, so it is a MySQL-specific, self-defined error, and the only useful information it carries is the literal meaning of the message: an unknown (unrecognized) thread id.

    Analyzing the exception on the application framework side

    From the stack trace we can see:
    the exception at the top of the stack is org.springframework.jdbc.UncategorizedSQLException;
    the exception at the bottom of the stack is java.sql.SQLException.

    Analyzing UncategorizedSQLException

    Spring has several major modules: IoC, AOP, data access and integration, the web and remoting support, the test framework, and so on. Data access and integration is one of the core parts of the framework, and one thing Spring does there is unify the data access exception hierarchy, wrapping the exceptions thrown by common data access operations (see the exception classes under the org.springframework.dao and org.springframework.transaction packages).
    So what kind of exception does UncategorizedSQLException stand for?

    1. The package tells us its scope
      UncategorizedSQLException lives in the org.springframework.jdbc package rather than in the dao or transaction packages, which means it is a Spring exception specific to the JDBC support.
    2. The class name is also telling
      UncategorizedSQLException ends in SQLException, which suggests it is related to java.sql.SQLException.
    3. Read the class source (some comments and code removed for brevity)
    /**
     * Exception thrown when we can't classify a SQLException into
     * one of our generic data access exceptions.
     */
    public class UncategorizedSQLException extends UncategorizedDataAccessException {
    
    	/** SQL that led to the problem */
    	private final String sql;
    
    	public UncategorizedSQLException(String task, String sql, SQLException ex) {
    		super(task + "; uncategorized SQLException for SQL [" + sql + "]; SQL state [" +
    				ex.getSQLState() + "]; error code [" + ex.getErrorCode() + "]; " + ex.getMessage(), ex);
    		this.sql = sql;
    	}
    
    	/**
    	 * Return the underlying SQLException.
    	 */
    	public SQLException getSQLException() {
    		return (SQLException) getCause();
    	}
    
    	/**
    	 * Return the SQL that led to the problem.
    	 */
    	public String getSql() {
    		return this.sql;
    	}
    }
    

    • From the class javadoc we learn:

    This exception is thrown when a SQLException cannot be classified into one of the generic data access exceptions.

    • The javadoc of its superclass, UncategorizedDataAccessException, adds:

    This exception can be used when we only know that something went wrong in the underlying API (JDBC, for example) and have no more specific information; the example it gives is a SQLException thrown by JDBC.

    • The class defines a String field holding the SQL that led to the problem.
    • It also defines a getSQLException method that returns the underlying SQLException (see the sketch below).
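
    That accessor is what lets calling code get back to the MySQL error hidden behind the Spring wrapper. A minimal sketch (the helper class and constant name are our own) of recognizing error 1094 behind an UncategorizedSQLException:

    import java.sql.SQLException;
    import org.springframework.jdbc.UncategorizedSQLException;

    public final class SqlErrorCodes {
        // MySQL's symbolic name for error 1094 ("Unknown thread id") is ER_NO_SUCH_THREAD.
        public static final int ER_NO_SUCH_THREAD = 1094;

        /** True if the Spring wrapper hides MySQL error 1094. */
        public static boolean isUnknownThreadId(UncategorizedSQLException e) {
            SQLException root = e.getSQLException(); // unwrap to the underlying JDBC exception
            return root != null && root.getErrorCode() == ER_NO_SUCH_THREAD;
        }
    }

    A caller catching UncategorizedSQLException around a mapper call can use this, together with getSql(), to log the failing statement and decide whether a retry makes sense.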
    SQLException

    1. The class lives in the java.sql package and is part of the JDK.
    2. What is the java.sql package for?
    We rarely pay attention to this package in day-to-day development, but everybody knows the concept behind it: JDBC. JDBC is the specification Java defines for database access, a set of APIs containing both interfaces and concrete classes. All of JDBC's interfaces and classes live in the java.sql package; that package is where JDBC lives. Most of its members are interfaces, which each database vendor has to implement.

    • JDBC interfaces: Driver, Connection, Statement, and so on.
    • JDBC concrete classes: SQLException is one of them, alongside DriverManager, Date, JDBCType (an enum), and others.
    Summary

    At this point we have established, from the application framework side, that the underlying exception is a JDBC SQLException. It was not wrapped by MyBatis (we do use MyBatis, but no MyBatis-defined exception appears in the stack trace); it was wrapped by Spring into an UncategorizedSQLException and rethrown.


    The analysis from these two angles, plus the background covered along the way, mostly cross-checks theory against what we observed; it mainly answers what the exception is. Next we look at how it is produced.


    How it is produced

    When summarizing the scenarios we said the failures were probably due to heavy database load at the time the exceptions occurred, which made the SQL fail. But why does it surface as this particular exception rather than something more concrete, such as a timeout or a cannot-obtain-connection exception?
    Searching Baidu and Google for this exception (unknown thread id) turns up almost nothing of value, so perhaps it only appears in a special scenario, or one peculiar to our environment.
    So we probably need to look for the answer in the source code.

    Look at the exception stack trace again:

    Caused by: java.sql.SQLException: Unknown thread id: 71436599
    	at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:959)
    	at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3870)
    	at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3806)
    	at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2470)
    	at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2617)
    	at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2546)
    	at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2504)
    	at com.mysql.jdbc.StatementImpl.executeInternal(StatementImpl.java:840)
    	at com.mysql.jdbc.StatementImpl.execute(StatementImpl.java:740)
    	at com.mysql.jdbc.StatementImpl$CancelTask$1.run(StatementImpl.java:119)
    
    StatementImpl$CancelTask: partial source analysis

    The bottom of the stack trace clearly shows that the exception was produced while running the run method of CancelTask, an inner class of com.mysql.jdbc.StatementImpl, at line 119. The relevant code fragment located through that line number:

    public class StatementImpl implements Statement {

        // much code omitted here

        protected boolean wasCancelled = false; // whether this statement was cancelled
        protected boolean wasCancelledByTimeout = false; // whether it was cancelled because of a timeout
        
        class CancelTask extends TimerTask {

            // much code omitted here

            
            SQLException caughtWhileCancelling = null; // stores any SQLException raised while cancelling
            StatementImpl toCancel; // the StatementImpl instance to be cancelled

            CancelTask(StatementImpl cancellee) throws SQLException {
                // much code omitted here
            }

            @Override
            public void run() {

                Thread cancelThread = new Thread() {

                    @Override
                    public void run() {

                        Connection cancelConn = null;
                        java.sql.Statement cancelStmt = null;

                        try {
                            if (StatementImpl.this.connection.getQueryTimeoutKillsConnection()) {
                                // much code omitted here
                            } else {
                                synchronized (StatementImpl.this.cancelTimeoutMutex) {
                                    if (CancelTask.this.origConnURL.equals(StatementImpl.this.connection.getURL())) {
                                        //All's fine
                                        cancelConn = StatementImpl.this.connection.duplicate();
                                        cancelStmt = cancelConn.createStatement();
    119                                 cancelStmt.execute("KILL QUERY " + CancelTask.this.connectionId);
                                    } else {
                                        // much code omitted here
                                    }
    /* 
     * Reaching this point means the outer statement was successfully cancelled because of
     * the timeout, so both of the flags below are set to true
     */                              CancelTask.this.toCancel.wasCancelled = true;
                                    CancelTask.this.toCancel.wasCancelledByTimeout = true;
                                }
                            }
                        } catch (SQLException sqlEx) {
                            /*
                             * If a SQLException is caught here, it is stored in the instance field
                             * caughtWhileCancelling (it is NOT rethrown).
                            */
                            CancelTask.this.caughtWhileCancelling = sqlEx;
                        } catch (NullPointerException npe) {
                            
                        } finally {
                            // much code omitted here
                        }
                    }
                };

                cancelThread.start();
            }
        }

        // much code omitted here
    }
    

    Line 119 in this fragment is the line that produced the exception.
    A few points here are worth discussing.

    1. What is the StatementImpl class for?
      It is the implementation of JDBC's Statement interface; virtually all data operations through JDBC go through Statement's executeXXX methods.
    2. What is CancelTask for?
      It is an inner class of StatementImpl. If a statement timeout is configured, its job is to cancel the statement when execution exceeds the timeout.
    3. What is KILL QUERY for?
      We hardly ever use this command in business code, but any DBA knows it well.
      When some SQL does a full table scan, or a DDL statement grabs the metadata lock and blocks all subsequent requests, the DBA has to step in promptly and kill the offending threads; the KILL command is the tool for that.
      KILL can be used in two ways (see the MySQL reference manual, and the sketch after this list):
      • KILL xxid: kills the SQL thread identified by xxid and also kills its associated connection.
      • KILL QUERY xxid: kills only the SQL thread identified by xxid and leaves the connection alive.
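
    This is exactly what CancelTask does at line 119, only driven by a timer instead of a DBA. A minimal sketch (URL, credentials and the 30-second threshold are placeholders; the account needs the PROCESS/SUPER privilege) of doing the same thing by hand over JDBC:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import java.util.ArrayList;
    import java.util.List;

    public class KillLongQueries {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:mysql://localhost:3306/test", "admin", "secret")) {
                List<Long> victims = new ArrayList<>();
                try (Statement st = con.createStatement();
                     ResultSet rs = st.executeQuery("SHOW PROCESSLIST")) {
                    while (rs.next()) {
                        // "Id" is the server-side thread id, "Time" how long the command has run.
                        if ("Query".equals(rs.getString("Command")) && rs.getLong("Time") > 30) {
                            victims.add(rs.getLong("Id"));
                        }
                    }
                }
                try (Statement st = con.createStatement()) {
                    for (long id : victims) {
                        // Kill only the statement and keep the connection, just like CancelTask.
                        st.execute("KILL QUERY " + id);
                    }
                }
            }
        }
    }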

    The analysis above shows that a statement timeout is configured on our side, that the statement really did time out, and that this triggered the cancel task to kill the timed-out statement. But the cancellation itself ran into an exception, and that exception was thrown back into our business code.
    This raises a couple of new questions.


    1. The cancel task runs on its own thread, isolated from the business thread, so how does the exception from a failed cancellation end up in the business thread (the code above never rethrows it)?
    2. Was the timed-out SQL actually cancelled or not?

    To answer that we need to keep reading the source, this time from the angle of how business SQL is executed. Ordinary business SQL statements are ultimately handed to the execute methods of the Statement interface (there are several overloads). In both StatementImpl and PreparedStatement these methods delegate to a private executeInternal method of the respective class, and the main flow of executeInternal is roughly the same in both, so we analyze the StatementImpl version as the example.
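
    Before diving into executeInternal it helps to see what arms this machinery in the first place: the JDBC query timeout. A minimal sketch (URL, credentials and the SQL are placeholders; in a real service the framework usually sets this via its own timeout option) showing the timeout that schedules a CancelTask and what normally comes back:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class QueryTimeoutDemo {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:mysql://localhost:3306/test", "user", "pass");
                 Statement st = con.createStatement()) {
                // A non-zero timeout (in seconds) is what makes executeInternal schedule a CancelTask.
                st.setQueryTimeout(5);
                st.execute("SELECT SLEEP(60)"); // deliberately slower than the timeout
            } catch (SQLException e) {
                // On a healthy server this is a MySQLTimeoutException; under heavy load the
                // KILL QUERY issued by CancelTask can itself fail, and, as the code below shows,
                // that failure ("Unknown thread id") is what gets rethrown instead.
                System.out.println(e.getClass().getName() + ": " + e.getMessage());
            }
        }
    }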

    StatementImpl的executeInternal方法中的超时逻辑解析
    private boolean executeInternal(String sql, boolean returnGeneratedKeys) throws SQLException {
        MySQLConnection locallyScopedConn = checkClosed();
    
        synchronized (locallyScopedConn.getConnectionMutex()) {
    
            // ... code omitted ...
    
            try {
                
                // ... code omitted ...
    
                if (useServerFetch()) { 
                    // taken when the connection parameter useServerFetch=true is set; it eventually routes into PreparedStatement's executeInternal anyway
                    rs = createResultSetUsingServerFetch(sql);
                } else {
                    
                CancelTask timeoutTask = null; // the cancel task, if one is needed
    
                    String oldCatalog = null;
    
                    try {
                    /* Only if enableQueryTimeouts is true (the default),
                     * the timeout is non-zero (here that is equivalent to "greater than zero": the millisecond value must be >= 0,
                     * because setTimeoutInMillis validates it and throws if a negative value is passed),
                     * and the server version is at least 5.0.0.
                     */
                    if (locallyScopedConn.getEnableQueryTimeouts() && this.timeoutInMillis != 0 && locallyScopedConn.versionMeetsMinimum(5, 0, 0)) {
                        timeoutTask = new CancelTask(this); // create the cancel task
                        /*
                         * Hand the task to a Timer: it will run timeoutInMillis milliseconds from now
                         * (if the business SQL has not finished within timeoutInMillis, the scheduled cancel task will cancel it).
                         * Reading Timer together with CancelTask, we can see that when the Timer runs a queued task it calls the task's
                         * run method, and CancelTask.run in turn starts its internal cancel thread (cancelThread.start()).
                         */
                        locallyScopedConn.getCancelTimer().schedule(timeoutTask, this.timeoutInMillis);
                        }
    
                        if (!locallyScopedConn.getCatalog().equals(this.currentCatalog)) {
                            oldCatalog = locallyScopedConn.getCatalog();
                            locallyScopedConn.setCatalog(this.currentCatalog);
                        }
    
                        // ... code omitted ...

                        // this is where the SQL is actually executed and the result returned
                        rs = locallyScopedConn.execSQL(this, sql, this.maxRows, null, this.resultSetType, this.resultSetConcurrency,
                                createStreamingResultSet(), this.currentCatalog, cachedFields);

                        // is there a timeout cancel task? there is one whenever a timeout was configured
                        if (timeoutTask != null) {
                            // if the cancel task's caughtWhileCancelling field is non-null
                            // (we saw in the CancelTask source that its type is SQLException)
                            if (timeoutTask.caughtWhileCancelling != null) {
                                throw timeoutTask.caughtWhileCancelling; // line 942 in the driver source (this is the spot): rethrow that SQLException
                            }

                            // Getting here means the cancel task did not hit an exception. It may not have run at all,
                            // or it may have cancelled successfully; either way the SQL has already returned its result,
                            // so the scheduled task is no longer needed and is cancelled here.
                            timeoutTask.cancel();
                            // Null it out so it can be collected sooner. If the reference were kept, the outer StatementImpl
                            // instance would keep this cancel task alive for as long as the StatementImpl itself lives.
                            timeoutTask = null;
                        }
    
                        synchronized (this.cancelTimeoutMutex) {
                            if (this.wasCancelled) { // if our own statement was successfully cancelled
                                SQLException cause = null; // the JDBC exception; will point at a concrete subclass

                                if (this.wasCancelledByTimeout) { // cancelled because of the timeout
                                    cause = new MySQLTimeoutException(); // a MySQL timeout exception
                                } else {
                                    cause = new MySQLStatementCancelledException(); // a MySQL "statement cancelled" exception
                                }

                                resetCancelledState(); // reset wasCancelled and wasCancelledByTimeout to false

                                throw cause; // throw it
                            }
                        }
                    } finally { // further cleanup in the finally block

                        if (timeoutTask != null) {
                            timeoutTask.cancel(); // cancel the task
                            locallyScopedConn.getCancelTimer().purge(); // remove cancelled tasks from the timer queue
                        }
    
                        if (oldCatalog != null) {
                            locallyScopedConn.setCatalog(oldCatalog);
                        }
                    }
                }
    
                if (rs != null) {
                // ... code omitted ...
                }
    
                return ((rs != null) && rs.reallyResult());
            } finally {
                locallyScopedConn.setReadInfoMsgEnabled(readInfoMsgState);
    
                this.statementExecuting.set(false);
            }
        }
    }
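
    The scheduling mechanism the driver relies on here is plain java.util.Timer/TimerTask (getCancelTimer() hands back a Timer, and CancelTask is a TimerTask). A stripped-down sketch of the same pattern, with the delay and the printed message made up purely for illustration:

    import java.util.Timer;
    import java.util.TimerTask;

    public class WatchdogDemo {
        public static void main(String[] args) throws InterruptedException {
            Timer cancelTimer = new Timer("watchdog", true); // plays the role of Connection.getCancelTimer()

            TimerTask timeoutTask = new TimerTask() {
                @Override
                public void run() {
                    // In the driver this is where CancelTask starts its cancel thread
                    // and issues KILL QUERY <id>.
                    System.out.println("timeout fired, cancelling the work");
                }
            };

            long timeoutInMillis = 1000;
            cancelTimer.schedule(timeoutTask, timeoutInMillis); // arm the watchdog, one-shot

            Thread.sleep(500); // the "statement" finishes before the timeout...

            timeoutTask.cancel();  // ...so the watchdog is disarmed,
            cancelTimer.purge();   // and the cancelled task is dropped from the queue,
                                   // mirroring the finally block in executeInternal
        }
    }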
    

    With this reading of the source we can answer the two questions raised above.

    1. The cancel task runs on its own thread, isolated from the business thread; how did the cancellation-failure exception get thrown on the business thread (the code does not rethrow it)?

      • First, CancelTask does not rethrow the SQLException it catches; it only stores it in its instance field caughtWhileCancelling.
      • Second, executeInternal checks whether the CancelTask instance (timeoutTask) has a non-null caughtWhileCancelling and, if so, throws it.

      It is because of these two steps that an exception raised by the cancel task ends up being thrown on the business thread.

    2. Was the timed-out SQL actually cancelled?
      Because the cancellation itself failed with an exception, the timed-out SQL was not cancelled at all; it simply kept running, and only after it completed did the post-execution check throw the stored cancellation exception. A small sketch after the following list illustrates the consequences.

      • Once the exception is thrown, the rest of the flow is of course aborted.
      • Whether or not the SQL ran inside a transaction, the read itself has by then already completed successfully.
      • If the SQL ran inside a transaction, writes can be rolled back.
      • If it did not run inside a transaction, writes are not rolled back.
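
    As an illustration of the last two points, here is a hypothetical Spring service (the ReportDao interface and its methods are made up) that performs a write and then runs the slow query; whether the write survives depends only on whether the method runs inside a transaction when the exception propagates (Spring translates the SQLException into a runtime DataAccessException, so the default rollback rules apply):

    import java.util.List;
    import java.util.Map;
    import org.springframework.stereotype.Service;
    import org.springframework.transaction.annotation.Transactional;

    public interface ReportDao { // hypothetical DAO abstraction
        void insertAuditRecord();
        List<Map<String, Object>> getListFromGroupBy();
    }

    @Service
    public class ReportService {

        private final ReportDao reportDao;

        public ReportService(ReportDao reportDao) {
            this.reportDao = reportDao;
        }

        // Runs in a transaction: if the slow query below throws, the insert is rolled back.
        @Transactional
        public void refreshWithTx() {
            reportDao.insertAuditRecord();   // write
            reportDao.getListFromGroupBy();  // slow read that may throw
        }

        // No transaction: with auto-commit the insert is already committed by the
        // time the exception is thrown, so it is NOT rolled back.
        public void refreshWithoutTx() {
            reportDao.insertAuditRecord();
            reportDao.getListFromGroupBy();
        }
    }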

    After these two rounds of source analysis we can pin down the background of the exception: we configured a statement timeout, the main thread's statement ran past it, and the asynchronous cancel task hit this exception while trying to kill the timed-out statement. The exception was merely stashed away; only after the main thread's statement finished did the post-execution check detect it and throw it. A few things still need a closer look.

    1. How is the timeout configured?
    2. Why does the cancellation fail?

    Timeout configuration

    The timeout configuration discussed here is based on a spring + mybatis + jdbc stack. Since the topic is already covered very thoroughly elsewhere, we will not go into detail; two references:
    Understanding JDBC timeout internals (Chinese translation)
    Understanding JDBC timeout internals (English original)

    For the purposes of this article, a few figures and passages were excerpted from the articles above.

    Timeout levels between the application and the database

    [Figure: the timeout hierarchy]

    Configuring the transaction timeout

    A transaction timeout only exists at the level of higher frameworks; JDBC itself has no such concept. With the Spring framework it can be configured in XML or with an annotation.
    1. For selected methods (3-second timeout)

    <tx:method name="..." timeout="3"/>
    

    2. For a class, or a single method of a class (3-second timeout)

    @Transactional(timeout=3)
    

    The transaction timeout covers the execution time of the whole transaction, which includes not only the database operations but all the other business processing as well; roughly, transaction time = statement time * n + everything else.

    Configuring the JDBC statement timeout

    The statement timeout limits how long a single statement may execute; it is set through the JDBC API java.sql.Statement.setQueryTimeout(int seconds). Since we use MyBatis, it is normally configured in one of two ways, both of which boil down to that driver-level call (a bare-JDBC sketch follows).
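
    A minimal bare-JDBC sketch of the underlying call (URL, credentials and the query are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class QueryTimeoutDemo {
        public static void main(String[] args) throws SQLException {
            String url = "jdbc:mysql://127.0.0.1:3306/db_xxx"; // placeholder

            try (Connection conn = DriverManager.getConnection(url, "user", "pass");
                 Statement stmt = conn.createStatement()) {

                stmt.setQueryTimeout(3); // statement timeout, in SECONDS (not milliseconds)

                // If this runs longer than 3 seconds the driver's CancelTask fires and,
                // when nothing goes wrong, a MySQLTimeoutException is thrown.
                try (ResultSet rs = stmt.executeQuery("select count(*) from t_xxx_xxx")) {
                    while (rs.next()) {
                        System.out.println(rs.getLong(1));
                    }
                }
            }
        }
    }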
    1. Global configuration
    Set the global defaultStatementTimeout; the snippet below gives every statement a 3-second timeout.

    <configuration>
    	<settings>
    		<!-- other settings omitted -->
    		<setting name="defaultStatementTimeout" value="3" />
    	</settings>
    	
    	<mappers>
    		<!-- other settings omitted -->
    	</mappers>
    </configuration>
    

    2. Per-statement configuration
    A timeout can also be set on an individual statement in a mapper file; the snippet below gives a single SQL a 1-second timeout.

    <select id="getListFromGroupBy" resultType="java.util.Map" timeout="1">
         select c_name,c_code,d_name,d_code,count(*) from t_xxx_xxx group by c_name,c_code,d_name,d_code  order by rand() limit 100
    </select>
    

    How MySQL Connector/J handles the Statement QueryTimeout (5.0.8)

    [Figure: the processing flow in MySQL Connector/J 5.0.8]

    Configuring the JDBC socket timeout

    The JDBC implementation we use (mysql-connector-java-XXX.jar) talks to the database over sockets underneath; every database vendor ships its own JDBC implementation for its own databases. If no timeout is set on that socket communication, a network problem can easily turn into an indefinite wait that eventually exhausts system resources. The socket timeout can be read as a transmission (read/write) timeout; its constant companion is the connect timeout, the limit on establishing the connection (a tiny stand-alone socket sketch below shows the failure mode). Since we use DBCP for the connection pool, these parameters usually live in a properties file and are referenced from the XML configuration.
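
    The failure mode itself is not MySQL-specific; it is plain java.net.Socket behaviour, as this stand-alone sketch illustrates (host and port are placeholders, and the read only times out against a peer that accepts the connection but sends nothing):

    import java.io.InputStream;
    import java.net.InetSocketAddress;
    import java.net.Socket;
    import java.net.SocketTimeoutException;

    public class SocketTimeoutDemo {
        public static void main(String[] args) throws Exception {
            try (Socket socket = new Socket()) {
                // connect timeout: fail if the TCP connection cannot be established within 1s
                socket.connect(new InetSocketAddress("127.0.0.1", 3306), 1000);

                // socket (read) timeout: a blocking read that waits longer than 5s is
                // aborted with SocketTimeoutException("Read timed out")
                socket.setSoTimeout(5000);

                InputStream in = socket.getInputStream();
                try {
                    in.read(); // blocks until data arrives or the 5s read timeout fires
                } catch (SocketTimeoutException e) {
                    // this is exactly what shows up at the bottom of the JDBC stack trace
                    System.out.println("read timed out: " + e.getMessage());
                }
            }
        }
    }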
    1. properties configuration
    Connect timeout set to 1 second, read/write (socket) timeout to 5 seconds:

    jdbc.connectionProperties=connectTimeout=1000;socketTimeout=5000;useUnicode=true;characterEncoding=utf8[;key=val]
    

    2. XML configuration

    <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" abstract="true">
        <!-- other settings omitted -->
        <property name="connectionProperties" value="${jdbc.connectionProperties}"/>
    </bean>
    

    In MySQL's JDBC driver both of these settings are eventually passed through ConnectionImpl (which extends ConnectionPropertiesImpl) into MysqlIO, the class that wraps the low-level socket operations. So what kind of exception do we get when the socket times out?
    Socket timeout exception stack

    org.springframework.dao.RecoverableDataAccessException: 
    ### Error querying database.  Cause: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
    
    The last packet successfully received from the server was 120,104 milliseconds ago.  The last packet sent successfully to the server was 120,099 milliseconds ago.
    ### The error may exist in mapper/filter/XxxMapper.xml
    ### The error may involve com.xxxx.xxxx_xxxx.filter.dao.XxxDao.findByXxxId-Inline
    ### The error occurred while setting parameters
    ### SQL: // omitted
    ### Cause: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
    	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    	at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
    	at com.mysql.jdbc.Util.handleNewInstance(Util.java:404)
    	at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:983)
    	at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3457)
    	at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3357)
    	at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3797)
    	at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2470)
    	at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2617)
    	at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2550)
    	at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:1861)
    	at com.mysql.jdbc.PreparedStatement.execute(PreparedStatement.java:1192)
    	at sun.reflect.GeneratedMethodAccessor146.invoke(Unknown Source)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:483)
    	at com.mysql.jdbc.MultiHostConnectionProxy$JdbcInterfaceProxy.invoke(MultiHostConnectionProxy.java:91)
    	at com.sun.proxy.$Proxy74.execute(Unknown Source)
    	at org.apache.commons.dbcp.DelegatingPreparedStatement.execute(DelegatingPreparedStatement.java:172)
    	at org.apache.commons.dbcp.DelegatingPreparedStatement.execute(DelegatingPreparedStatement.java:172)
    	at org.apache.ibatis.executor.statement.PreparedStatementHandler.query(PreparedStatementHandler.java:62)
    	at org.apache.ibatis.executor.statement.RoutingStatementHandler.query(RoutingStatementHandler.java:78)
    	at sun.reflect.GeneratedMethodAccessor145.invoke(Unknown Source)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:483)
    	at org.apache.ibatis.plugin.Plugin.invoke(Plugin.java:63)
    	at com.sun.proxy.$Proxy70.query(Unknown Source)
    	at org.apache.ibatis.executor.ReuseExecutor.doQuery(ReuseExecutor.java:59)
    	at org.apache.ibatis.executor.BaseExecutor.queryFromDatabase(BaseExecutor.java:303)
    	at org.apache.ibatis.executor.BaseExecutor.query(BaseExecutor.java:154)
    	at org.apache.ibatis.executor.CachingExecutor.query(CachingExecutor.java:96)
    	at org.apache.ibatis.executor.CachingExecutor.query(CachingExecutor.java:82)
    	at sun.reflect.GeneratedMethodAccessor156.invoke(Unknown Source)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:483)
    	at org.apache.ibatis.plugin.Invocation.proceed(Invocation.java:49)
    	at org.apache.ibatis.plugin.Plugin.invoke(Plugin.java:61)
    	at com.sun.proxy.$Proxy69.query(Unknown Source)
    	at org.apache.ibatis.session.defaults.DefaultSqlSession.selectList(DefaultSqlSession.java:120)
    	at org.apache.ibatis.session.defaults.DefaultSqlSession.selectList(DefaultSqlSession.java:113)
    	at sun.reflect.GeneratedMethodAccessor141.invoke(Unknown Source)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:483)
    	at org.mybatis.spring.SqlSessionTemplate$SqlSessionInterceptor.invoke(SqlSessionTemplate.java:358)
    	... 104 more
    Caused by: java.net.SocketTimeoutException: Read timed out
    	at java.net.SocketInputStream.socketRead0(Native Method)
    	at java.net.SocketInputStream.read(SocketInputStream.java:150)
    	at java.net.SocketInputStream.read(SocketInputStream.java:121)
    	at com.mysql.jdbc.util.ReadAheadInputStream.fill(ReadAheadInputStream.java:100)
    	at com.mysql.jdbc.util.ReadAheadInputStream.readFromUnderlyingStreamIfNecessary(ReadAheadInputStream.java:143)
    	at com.mysql.jdbc.util.ReadAheadInputStream.read(ReadAheadInputStream.java:173)
    	at com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:2946)
    	at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3367)
    	... 143 more
    
    
    1. The message at the top of the stack,
      The last packet successfully received from the server was 120,104 milliseconds ago. The last packet sent successfully to the server was 120,099 milliseconds ago.

    says that the last packet successfully received from the server was 120,104 ms (a little over 120 seconds) ago, and the last packet successfully sent to the server was 120,099 ms (also just over 120 seconds) ago.

    2. The Caused by: java.net.SocketTimeoutException: Read timed out at the bottom of the stack states the root cause plainly: a read timeout.
    Summary of the timeout configuration

    Here is the real configuration of one production system, to show how the various timeout parameters can be chosen.

    • connect/socket timeout
    connectTimeout=1000;socketTimeout=120000
    
    • statement timeout
    <setting name="defaultStatementTimeout" value="3" />
    
    • transaction timeout
    not set
    

    This is the core business system mentioned earlier: it has to expose high-performance read interfaces and it also runs scheduled data-refresh jobs (let us not get into splitting it into two systems here). The database cluster it depends on takes heavy ETL writes every day in the small hours, and database I/O pressure is very high during that window.

    1. For the high-performance interfaces, most requests hit the cache and only a small share falls through to the database. Given the interface performance requirements and the 3-second timeout of the external proxy layer, a SQL that takes more than 3 seconds is as good as a failed request, so the statement timeout is set to 3 seconds.
    2. The data-refresh job by its nature deletes old rows from the database and pulls fresh data from an external NoSQL source into it. Those reads and writes are already heavy, and if they collide with the nightly ETL writes, blocked requests or dropped connections become likely. A socket timeout is mandatory, but since this is a scheduled job running in the small hours the value can be relatively generous to reduce the number of failed batches; 120 seconds (2 minutes) was chosen, a value that was itself tuned a few times for this system.
    3. The high-performance interfaces are read-only, so no transactions are needed there. The refresh job loops over large batches of writes and can also see high latency; each batch touches a single table, so there is no need to wrap individual batches in transactions either, let alone use one big transaction to commit all batches together. Instead the system relies on retrying failed small batches to keep the overall success rate up.

    The stack trace shown earlier under "Socket timeout exception stack" comes from this real environment. At this point you may wonder: the statement timeout is 3 seconds and the socket timeout is 2 minutes, so the statement should always time out first and we should always get the statement timeout exception. Why do we get a socket read timeout instead?

    It can be understood like this. When the socket is already blocked, the asynchronous cancel task also has to deliver its KILL command to the database over a socket, so that request can be blocked in exactly the same way; in other words, the cancel request may never get a response either. And even if the cancel request is answered quickly, the main thread's socket is still blocked: it never learns that the MySQL thread was killed and cannot proceed, so in the end it fails with the socket timeout and bypasses the post-execution checks tied to the cancel task. If, however, the main thread is blocked for more than 3 seconds but less than 2 minutes, and the cancel thread gets its resources and a quick response, then we end up with either the statement timeout exception (MySQLTimeoutException) or the cancellation-failure exception (SQLException). In the two scenarios described earlier in this article, what we got was the cancellation-failure exception (SQLException).

    Why does the cancellation fail?
    We configured a statement timeout, so when a statement runs past the limit we expect it to be killed and the business layer to receive the expected MySQLTimeoutException. Instead we got a cancellation-failure SQLException, and as noted above the statement was not cancelled at all: it ran to completion, taking longer than the configured timeout. In other words, our statement timeout did not do its job, so the failed cancellation is well worth chasing down.

    Unknown thread id, revisited

    As the source analysis showed, this exception is thrown while executing kill query xxid. Taken literally it means "unknown thread id", so presumably the MySQL server cannot recognise that thread id, most likely because no such thread id exists. And if it does not exist, why not?

    1. It never existed in the first place (the id supplied by the application layer is simply wrong).
    2. It did exist, but has since been destroyed for some reason.

    To answer this with certainty, we can try to reproduce the scenario in a test environment and observe it. That requires the following steps.
    1) Monitor MySQL's threads
    The currently running threads of the MySQL server can be inspected through the information_schema.processlist table:

    select  *  from    information_schema.processlist
    

    2) Prepare a slow SQL
    We need a statement that takes more than 1 second most of the time; we can then set the statement timeout to 1 second so that running it reliably triggers the timeout.
    I picked a table in a test environment with a bit over 400,000 rows and wrote a slightly convoluted GROUP BY query; in testing it took more than 3 seconds on almost every run. (If you just need something slow, a statement like select sleep(3) would also do.) The SQL:

    select c_name,c_code,d_name,d_code,count(*) from t_xxx_xxx group by c_name,c_code,d_name,d_code  order by rand() limit 100
    

    Do not read any business meaning into this SQL (it has none); it exists purely to be slow.
    3) Code and configuration
    1. Configure the timeout on the mapper statement

    <select id="getListFromGroupBy" resultType="java.util.Map"
    		useCache="false" timeout="1">
    		select
    		c_name,c_code,d_name,d_code,count(*) from
    		t_xxx_xxx group by
    		c_name,c_code,d_name,d_code order by rand() limit
    		100
    	</select>
    

    This gives that query, whose average run time is over 3 seconds, a query timeout (statement timeout) of 1 second, so the timeout fires on well over 90% of runs.

    2. JUnit test code

    @Test
    public void testStatementTimeout() {
        List<Map<String, Object>> list = xxxBusiness.getListFromGroupBy(); // ends up executing the slow SQL above
        Assert.assertTrue(list.size() == 100);
    }
    

    4) Run the test
    1. Start the unit test
    2. Find our SQL among the MySQL threads
    The MySQL account I use has fairly broad privileges, so to keep the output focused on the interesting columns I adjusted the processlist query a little:

    select `ID`,`HOST`,`COMMAND`,`INFO` from information_schema.processlist  where db='db_xxx' and length(info)>0 order by id desc;
    

    Now we only see the SQL threads that belong to the db_xxx schema.
    Since the business SQL needs about 3 seconds to finish, we can keep re-running this monitoring query by hand until the business SQL shows up in the result (or poll it automatically, as in the sketch below). Once it appears we can stop refreshing: on the next refresh the row may already be gone, because the thread entry disappears as soon as the business SQL completes.
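
    Instead of refreshing by hand, a small hypothetical helper can poll processlist from a second connection while the unit test runs and print whatever it sees (connection details and the schema filter are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class ProcesslistWatcher {
        public static void main(String[] args) throws Exception {
            String url = "jdbc:mysql://127.0.0.1:3306/information_schema"; // placeholder

            String sql = "select ID, HOST, COMMAND, INFO from information_schema.processlist "
                       + "where db = ? and length(info) > 0 order by id desc";

            try (Connection conn = DriverManager.getConnection(url, "user", "pass");
                 PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setString(1, "db_xxx"); // placeholder schema name

                // Poll a few times per second for ~20 seconds while the test is running.
                for (int i = 0; i < 100; i++) {
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            System.out.printf("id=%d host=%s command=%s info=%s%n",
                                    rs.getLong("ID"), rs.getString("HOST"),
                                    rs.getString("COMMAND"), rs.getString("INFO"));
                        }
                    }
                    Thread.sleep(200);
                }
            }
        }
    }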

    • Right after the unit test starts, before the business SQL begins executing, the result list looks like this:

      ID     | HOST                 | COMMAND | INFO
      553063 | 192.168.35.194:59007 | Query   | select ID,HOST,COMMAND,INFO from information_schema.processlist where db='db_xxx' and length(info)>0 order by id desc LIMIT 0, 1000

    • After a few more refreshes, once the business SQL has started but not yet finished, the result list looks like this:

      ID     | HOST                 | COMMAND | INFO
      553696 | 192.168.2.10:40730   | Query   | select c_name,c_code,d_name,d_code,count(*) from t_xxx_xxx group by c_name,c_code,d_name,d_code order by rand() limit 100
      553063 | 192.168.35.194:59007 | Query   | select ID,HOST,COMMAND,INFO from information_schema.processlist where db='db_xxx' and length(info)>0 order by id desc LIMIT 0, 1000

    Note: the ID column is the thread id, and the HOST column is the address of the client that issued the request (192.168.35.194 is my own machine's IP).
    3. Get the thread id from the exception
    The JUnit stack trace is as follows.

    org.springframework.jdbc.UncategorizedSQLException: 
    ### Error querying database.  Cause: java.sql.SQLException: Unknown thread id: 3805330
    ### The error may exist in mapper/xxx/XxxMapper.xml
    ### The error may involve defaultParameterMap
    ### The error occurred while setting parameters
    ### SQL: select c_name,c_code,d_name,d_code,count(*) from t_xxx_xxx group by c_name,c_code,d_name,d_code  order by rand() limit 100
    ### Cause: java.sql.SQLException: Unknown thread id: 3805330
    ; uncategorized SQLException for SQL []; SQL state [HY000]; error code [1094]; Unknown thread id: 3805330; nested exception is java.sql.SQLException: Unknown thread id: 3805330
    	at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:84)
    	at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:81)
    	at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:81)
    	// some frames omitted
    Caused by: java.sql.SQLException: Unknown thread id: 3805330
    	at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:959)
    	// some frames omitted
    	at com.mysql.jdbc.StatementImpl$CancelTask$1.run(StatementImpl.java:119)
    

    The exception tells us that the thread id the driver tried to cancel was 3805330.


    5) Analyse the result
    With this experiment we successfully reproduced the Unknown thread id exception.

    Watching the MySQL thread list we established that the business SQL ran on thread id 553696, yet the cancel thread tried to kill thread id 3805330. The two numbers are nowhere near each other, so we are clearly trying to kill a wrong id. That answers the earlier question:

    the thread id does not exist because it never existed in the first place (the id supplied by the application layer is wrong).

    Next: why is the id supplied by the application layer wrong?

    Look again at the data gathered during the test. As noted, the HOST column of the MySQL thread list is the address of the communicating client. I connected to the remote database with a MySQL GUI tool on my own machine to run the processlist query, so the HOST of that row is my local IP, 192.168.35.194. But the unit test also ran on my machine, so why is the HOST of the business SQL 192.168.2.10 rather than 192.168.35.194?

    Because we use ProxySQL. ProxySQL is a high-performance MySQL middleware with a powerful rule engine; one of its features is read/write splitting that is transparent to the application.

    A brief look at ProxySQL

    Precisely because the application connects to ProxySQL and not to the real MySQL, every request and response between the application and the database is relayed by ProxySQL; that is why the client the database sees is the ProxySQL host rather than my machine.

    Given that there is a middleware layer between the application and MySQL, could the wrong id the application ends up with have something to do with ProxySQL? Could it be some id that belongs to ProxySQL itself?

    A look through the ProxySQL documentation eventually turned up the useful fact that ProxySQL's own process (session) information can be inspected with the following SQL:

    select * from stats_mysql_processlist
    

    The middleware is maintained by the DBAs and developers have no direct access, so I asked a DBA to run the query; the result showed that the 3805330 we obtained is in fact ProxySQL's SessionID.

    ProxySQL must honour the MySQL wire protocol, otherwise it could not be transparent to both ends; at the same time, as a proxy-layer middleware it may need to rewrite parts of the traffic (limited to certain fields defined by the protocol) in order to do its job.

    After the application establishes the TCP connection, the handshake packet that the MySQL server sends first contains a 4-byte connection id, which is exactly the MySQL thread id; ProxySQL replaces that id with its own SessionID before handing the handshake back to the application layer.

    When the application later issues the KILL command, the id it attaches is that SessionID. The KILL statement, however, is part of the request body, which ProxySQL does not analyse or rewrite, so kill query SessionID is forwarded to MySQL unchanged; MySQL has no thread matching ProxySQL's SessionID and throws Unknown thread id. There is an even nastier possibility: if some real MySQL thread id happens to coincide with the SessionID, all kinds of strange behaviour would follow. (The small probe sketched below makes the substitution visible.)
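
    One way to observe the substitution (a hypothetical probe; URL, port and credentials are placeholders) is to ask the session for its connection id and compare it with the ID column that information_schema.processlist on the backend MySQL shows, and with the id printed in the Unknown thread id error (the id the driver captured from the handshake). Connected directly to MySQL all of these match; through a ProxySQL version that rewrites the handshake, the handshake id is ProxySQL's SessionID and is not found on the server.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ConnectionIdProbe {
        public static void main(String[] args) throws Exception {
            // Placeholder: point this at ProxySQL (often port 6033) or directly at MySQL (3306)
            String url = "jdbc:mysql://127.0.0.1:6033/db_xxx";

            try (Connection conn = DriverManager.getConnection(url, "user", "pass");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("select connection_id()")) {
                rs.next();
                // Compare this value with the ID column in information_schema.processlist
                // and with the thread id reported in the "Unknown thread id" exception.
                System.out.println("connection_id() reported by the server: " + rs.getLong(1));
            }
        }
    }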

    So the conclusion we reach is:

    ProxySQL, acting as a MySQL middleware, replaces the connection id in the handshake packet that the MySQL server sends to the application client; as a result, in the KILL scenario the client kills a wrong id, which triggers this kind of Unknown thread id exception. Note: this conclusion applies to our particular setup.

    How to fix it

    Don't use ProxySQL

    Continuing with the test case above, I changed the database connection configuration to connect to the database directly, leaving every other setting and all the code untouched. This time we got the following exception:

    org.springframework.dao.QueryTimeoutException: 
    ### Error querying database.  Cause: com.mysql.jdbc.exceptions.MySQLTimeoutException: Statement cancelled due to timeout or client request
    ### The error may exist in mapper/xxx/XxxMapper.xml
    ### The error may involve defaultParameterMap
    ### The error occurred while setting parameters
    ### SQL: select c_name,c_code,d_name,d_code,count(*) from t_xxx_xxx group by c_name,c_code,d_name,d_code  order by rand() limit 100
    ### Cause: com.mysql.jdbc.exceptions.MySQLTimeoutException: Statement cancelled due to timeout or client request
    ; SQL []; Statement cancelled due to timeout or client request; nested exception is com.mysql.jdbc.exceptions.MySQLTimeoutException: Statement cancelled due to timeout or client request
    	at org.springframework.jdbc.support.SQLStateSQLExceptionTranslator.doTranslate(SQLStateSQLExceptionTranslator.java:118)
    	at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:73)
    	at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:81)
    	at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:81)
    	at org.mybatis.spring.MyBatisExceptionTranslator.translateExceptionIfPossible(MyBatisExceptionTranslator.java:73)
    	at org.mybatis.spring.SqlSessionTemplate$SqlSessionInterceptor.invoke(SqlSessionTemplate.java:371)
    	at com.sun.proxy.$Proxy31.selectList(Unknown Source)
    	at org.mybatis.spring.SqlSessionTemplate.selectList(SqlSessionTemplate.java:198)
    	at org.apache.ibatis.binding.MapperMethod.executeForMany(MapperMethod.java:122)
    	at org.apache.ibatis.binding.MapperMethod.execute(MapperMethod.java:64)
    	at org.apache.ibatis.binding.MapperProxy.invoke(MapperProxy.java:53)
    	at com.sun.proxy.$Proxy39.getListFromGroupBy(Unknown Source)
    	// some frames omitted
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
    	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
    	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    	at org.springframework.test.context.junit4.statements.RunBeforeTestMethodCallbacks.evaluate(RunBeforeTestMethodCallbacks.java:75)
    	at org.springframework.test.context.junit4.statements.RunAfterTestMethodCallbacks.evaluate(RunAfterTestMethodCallbacks.java:86)
    	at org.springframework.test.context.junit4.statements.SpringRepeat.evaluate(SpringRepeat.java:84)
    	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
    	at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:254)
    	at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:89)
    	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
    	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
    	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
    	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
    	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
    	at org.springframework.test.context.junit4.statements.RunBeforeTestClassCallbacks.evaluate(RunBeforeTestClassCallbacks.java:61)
    	at org.springframework.test.context.junit4.statements.RunAfterTestClassCallbacks.evaluate(RunAfterTestClassCallbacks.java:70)
    	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
    	at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.run(SpringJUnit4ClassRunner.java:193)
    	at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:86)
    	at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
    	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:459)
    	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:678)
    	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:382)
    	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:192)
    Caused by: com.mysql.jdbc.exceptions.MySQLTimeoutException: Statement cancelled due to timeout or client request
    	at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2765)
    	at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2550)
    	at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:1861)
    	at com.mysql.jdbc.PreparedStatement.execute(PreparedStatement.java:1192)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at com.mysql.jdbc.MultiHostConnectionProxy$JdbcInterfaceProxy.invoke(MultiHostConnectionProxy.java:91)
    	at com.sun.proxy.$Proxy165.execute(Unknown Source)
    	at org.apache.commons.dbcp.DelegatingPreparedStatement.execute(DelegatingPreparedStatement.java:172)
    	at org.apache.commons.dbcp.DelegatingPreparedStatement.execute(DelegatingPreparedStatement.java:172)
    	at org.apache.ibatis.executor.statement.PreparedStatementHandler.query(PreparedStatementHandler.java:62)
    	at org.apache.ibatis.executor.statement.RoutingStatementHandler.query(RoutingStatementHandler.java:78)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.apache.ibatis.plugin.Plugin.invoke(Plugin.java:63)
    	at com.sun.proxy.$Proxy161.query(Unknown Source)
    	at org.apache.ibatis.executor.ReuseExecutor.doQuery(ReuseExecutor.java:59)
    	at org.apache.ibatis.executor.BaseExecutor.queryFromDatabase(BaseExecutor.java:303)
    	at org.apache.ibatis.executor.BaseExecutor.query(BaseExecutor.java:154)
    	at org.apache.ibatis.executor.CachingExecutor.query(CachingExecutor.java:102)
    	at org.apache.ibatis.executor.CachingExecutor.query(CachingExecutor.java:82)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.apache.ibatis.plugin.Invocation.proceed(Invocation.java:49)
    	at org.apache.ibatis.plugin.Plugin.invoke(Plugin.java:61)
    	at com.sun.proxy.$Proxy160.query(Unknown Source)
    	at org.apache.ibatis.session.defaults.DefaultSqlSession.selectList(DefaultSqlSession.java:120)
    	at org.apache.ibatis.session.defaults.DefaultSqlSession.selectList(DefaultSqlSession.java:113)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.mybatis.spring.SqlSessionTemplate$SqlSessionInterceptor.invoke(SqlSessionTemplate.java:358)
    	... 42 more
    
    • The Spring wrapper exception is org.springframework.dao.QueryTimeoutException
    • The cause wrapped by Spring is com.mysql.jdbc.exceptions.MySQLTimeoutException

    This indirectly confirms the earlier conclusion: in the statement-timeout scenario it is ProxySQL that turns the expected timeout into the Unknown thread id exception.

    Not using ProxySQL does solve the problem, but then we also lose what we deployed ProxySQL for in the first place (read/write splitting that is transparent to the application).

    Looking for an answer on the ProxySQL side

    After some searching, the answer finally turned up in ProxySQL's issue tracker.

    renecannao commented on 25 Nov 2018
    Feature implemented in 1.4.13 and 2.0.0

    The ProxySQL author's reply to a similar report is that the fix was implemented in versions 1.4.13 and 2.0.0.

    So we checked with the DBAs which versions the production and test environments were currently running:

    • production: 1.3
    • test environment: 1.4.8
    Upgrading ProxySQL

    To verify the claim, a DBA upgraded the test environment's ProxySQL to 1.4.15.
    Continuing with the same test case, I pointed the database connection configuration back at ProxySQL, again leaving all other configuration and code untouched. This time we got the following exception:

    org.springframework.dao.QueryTimeoutException: 
    ### Error querying database.  Cause: com.mysql.jdbc.exceptions.MySQLTimeoutException: Statement cancelled due to timeout or client request
    ### The error may exist in mapper/xxx/XxxMapper.xml
    ### The error may involve defaultParameterMap
    ### The error occurred while setting parameters
    ### SQL: select c_name,c_code,d_name,d_code,count(*) from t_xxx_xxx group by c_name,c_code,d_name,d_code  order by rand() limit 100
    ### Cause: com.mysql.jdbc.exceptions.MySQLTimeoutException: Statement cancelled due to timeout or client request
    ; SQL []; Statement cancelled due to timeout or client request; nested exception is com.mysql.jdbc.exceptions.MySQLTimeoutException: Statement cancelled due to timeout or client request
    	at org.springframework.jdbc.support.SQLStateSQLExceptionTranslator.doTranslate(SQLStateSQLExceptionTranslator.java:118)
    // some frames omitted
    

    So once ProxySQL is upgraded to 1.4.13 or later, the unknown thread id problem goes away.

  • Massively Multiplayer Middleware (大型多人游戏中间件)

         

    Massively Multiplayer Middleware

    by Michi Henning | February 24, 2004

    Translation: andywang金庆

    Topic: Game Development

    Massively Multiplayer Middleware
    MICHI HENNING, ZeroC

    Building scaleable middleware for ultra-massive online games teaches a lesson we all can use: Big project, simple design.


    Wish is a multiplayer, online, fantasy role-playing game being developed by Mutable Realms.1 It differs from similar online games in that it allows tens of thousands of players to participate in a single game world (instead of the few hundred players supported by other games). Allowing such a large number of players requires distributing the processing load over a number of machines and raises the problem of choosing an appropriate distribution technology.

     


    DISTRIBUTION REQUIREMENTS


    Mutable Realms approached ZeroC for the distribution requirements of Wish. ZeroC decided to develop a completely new middleware instead of using existing technology, such as CORBA (Common Object Request Broker Architecture).2 To understand the motivation for this choice, we need to examine a few of the requirements placed on middleware by games on the scale of Wish and other large-scale distributed applications.

     


     

    Multi-Platform Support. The dominant platform for the online games market is Microsoft Windows, so the middleware has to support Windows. For the server side, Mutable Realms had early on decided to use Linux machines: The low cost of the platform, together with its reliability and rich tool support, made this an obvious choice. The middleware, therefore, had to support both Windows and Linux, with possible later support for Mac OS X and other Unix variants.

     


     

    Multi-Language Support. Client and server software is written in Java, as well as a combination of C++ and assembly language for performance-critical functions. At ZeroC we used Java because some of our development staff had little prior C++ experience. Java also offers advantages in terms of defect count and development time; in particular, garbage collection eliminates the memory management errors that often plague C++ development. For administration of the game via the Web, we wanted to use the PHP hypertext processor. As a result, the game middleware had to support C++, Java, and PHP.

     


     

    Transport and Protocol Support. As we developed the initial distribution architecture for the game, it became clear that we were faced with certain requirements in terms of the underlying transports and protocols:

     


     

    •  Players connect to ISPs via telephone lines, as well as broadband links. While broadband is becoming increasingly popular, we had decided early on that the game had to be playable over an ordinary modem. This meant that communications between clients and server had to be possible via low-bandwidth and high-latency links.

     


     

    •  Much of the game is event driven. For example, as a player moves around, other players in the same area need to be informed of the changes in the game world around them. These changes can be distributed as simple events such as, “Player A moves to new coordinates <x,y>.”

     


     

    Ideally, events are distributed via “datagrams.” If the occasional state update is lost, little harm is done: A lost event causes a particular observer’s view of the game world to lag behind momentarily, but that view becomes up-to-date again within a very short time, when another event is successfully delivered.

     


     

    •  Events in the game often have more than one destination. For example, if a player moves within the field of vision of five other players, the same positional update must be sent to all five observing players. We wanted to be able to use broadcast or multicast to support such scenarios.

     


     

    •  Communications between clients and game servers must be secure. For an online subscription-based game, this is necessary for revenue collection, as well as to prevent cheating. (For example, it must be impossible for a player to acquire a powerful artifact by manipulating the client-side software.)

     


     

    •  Clients connect to the game from LANs that are behind firewalls and use NAT (network address translation). The communications protocol for the game has to be designed in a way that accommodates NAT without requiring knowledge of application-specific information in order to translate addresses.

     


     

    Versioning Support. We wanted to be able to update the game world while the game was being played—for example, to add new items or quests. These updates have to be possible without requiring every deployed client to be upgraded immediately—that is, client software at an older revision level has to continue to work with the updated game servers (albeit without providing access to newly added features). This means that the type system has to be flexible enough to allow updates, such as adding a field to a structure or changing the signature of a method, without breaking deployed clients.

     


     

    Ease of Use. Although a few of the Wish game developers are distributed computing experts, the majority have little or no experience. This means that the middleware has to be easy for nonexperts to use, with simple, threadsafe and exception-safe APIs (application programming interfaces).

     


     

    Persistence. Much of the game requires state, such as the inventory for each player, to be stored in a database. We wanted to provide developers with a way to store and retrieve persistent state for application objects without having to concern themselves with the actual database and without having to design database schemas. Particularly during development, as the game evolves, it is prohibitively time consuming to repeatedly redesign schemas to accommodate changes. In addition, as we improve the game while being deployed, we must add new features to a database and remove older features from it. We wanted an automatic way to migrate an existing, populated database to a new database schema without losing any of the information in the old database that was still valid.

     


     

    Threading. Much of the server-side processing is I/O-bound: Database and network access forces servers to wait for I/O completion. Other tasks, such as pathfinding, are compute-bound and can best be supported using parallel algorithms. This means that the middleware has to be inherently threaded and offer developers sufficient control over threading strategies to implement parallel algorithms while preventing problems such as thread starvation and deadlock. Given the idiosyncrasies of threading on different operating systems, we also wanted a platform-neutral threading model with a portable API.

     


     

    Scalability. Clearly, the most serious challenges for the middleware are in the area of scalability: For an online game, predicting realistic bounds is impossible on things such as the total number of subscribers or the number of concurrent players. This means that we need an architecture that can be scaled by federating servers (that is, adding more servers) as demands on the software increase.

     


     

    We also need fault-tolerance: For example, upgrading a server to a newer version of the game software has to be possible without kicking off every player currently using that server. The middleware has to be capable of automatically using a replica server while the original server is being upgraded.

     


     

    Other scalability issues relate to resource management. For example, we did not want to be subject to hardwired limits, such as a maximum number of open connections or instantiated objects. This means that, wherever possible, the middleware has to provide automated resource management functions that are not subject to arbitrary limits and are easy to use. Simultaneously, these functions have to provide enough control for developers to tune resource management to their needs. Wherever possible, we wanted to be able to change resource management strategies without requiring recompilation.

     


     

    A common scalability problem for distributed multiplayer games relates to managing distributed sets of objects. The game might allow players to form guilds, subject to certain rules: For example, a player may not be a member of more than one guild, or a guild may have at most one level-5 mage (magician). In computing terms, implementing such behavior boils down to performing membership tests on sets of distributed objects. Efficient implementation of such set operations requires an object model that does not incur the cost of a remote message for each test. In other words, the object identities of objects must be visible at all times and must have a total order.

     


     

    In classical RPC (remote procedure call) systems, object implementations reside in servers, and clients send remote messages to objects: All object behavior is on the server, with clients only invoking behavior, but not implementing it. Although this approach is attractive because it naturally extends the notion of a local procedure call to distributed scenarios, it causes significant problems:

     


     

    •  Sending a remote message is orders of magnitude slower than sending a local message. One obvious way to reduce network traffic is to create “fat” RPCs: as much data as possible is sent with each call to better amortize the cost of going on the wire. The downside of fat RPCs is that performance considerations interfere with object modeling: While the problem domain may call for fine-grained interfaces with many operations that exchange only a small amount of state, good performance requires coarse-grained interfaces. It is difficult to reconcile this design tension and find a suitable trade-off.

     


     

    •  Many objects have behavior and can be traded among players. Yet, to meet the processing requirements of the game, we have many servers (possibly in different continents) that implement object behavior. If behavior stays put in the server, yet players can trade objects, before long, players end up with a potion whose server is in the United States and a scroll whose server is in Europe, with the potion and scroll carried in a bag that resides in Australia. In other words, a pure client–server model does not permit client-side behavior and object migration, and, therefore, destroys locality of reference.

     


     

    We wanted an object model that supports both client- and server-side behavior so we could migrate objects and improve locality of reference.

     


    DESIGNING A NEW MIDDLEWARE


    Looking at our requirements, we quickly realized that existing middleware would be unsuitable. The cross-platform and multi-language requirements suggested CORBA; however, a few of us had previously built a commercial object request broker and knew from this experience that CORBA could not satisfy our functionality and scalability requirements. Consequently, we decided to develop our own middleware, dubbed Ice (short for Internet Communications Engine).3

     


     

    The overriding focus in the design of Ice was on simplicity: We knew from bitter experience that every feature is paid for in increased code and memory size, more complex APIs, steeper learning curve, and reduced performance. We made every effort to find the simplest possible abstractions (without passing the “complexity buck” to the developer), and we admitted features only after we were certain that we absolutely had to have them.

     


     

    Object Model. Ice restricts its object model to a bare minimum: Built-in data types are limited to signed integers, floating-point numbers, Booleans, Unicode strings, and 8-bit uninterpreted (binary) bytes. User-defined types include constants, enumerations, structures, sequences, dictionaries, and exceptions with inheritance. Remote objects are modeled as interfaces with multiple inheritance that contain operations with input and output parameters and a return value. Interfaces are passed by reference—that is, passing an interface passes an invocation handle via which an object can be invoked remotely.

     


     

    To support client-side behavior and object migration, we added classes: operation invocations on a class execute in the client’s address space (instead of the server’s, as is the case for interfaces). In addition, classes can have state (whereas interfaces, at the object-modeling level, are always stateless). Classes are passed by value—that is, passing a class instance passes the state of the class instead of a handle to a remote object.

     


     

    We did not attempt to pass behavior: This would require a virtual execution environment for objects but would be in conflict with our performance and multi-language requirements. Instead, we implemented identical behavior for a class at all its possible host locations (clients and servers): Rather than shipping code around, we provide the code wherever it is needed and ship only the state. To migrate an object, a process passes a class instance to another process and then destroys its copy of the instance; semantically, the effect is the same as migrating both state and behavior.

     


     

    Architecturally, implementing object migration in this way is a two-edged sword because it requires all host locations to implement identical (as opposed to merely similar) behavior. This has ramifications for versioning: If we change the behavior of a class at one host location, we must change the behavior of that class at all other locations (or suffer inconsistent behavior). Multiple languages also require attention. For example, if a class instance passes from a C++ server to a Java client, we must provide C++ and Java implementations with identical behavior. (Obviously, this requires more effort than implementing the behavior just once in a single language and single server.)

     


     

    For environments such as Wish, where we control both client and server deployment, this is acceptable; for applications that provide only servers and rely on other parties to provide clients, this can be problematic because ensuring identical behavior of third-party class implementations is difficult.

     


     

    Protocol Design. To meet our performance goals, we broke with established wisdom for RPC protocols in two ways:

     


     

    •  Data is not tagged with its type on the wire and is encoded as compactly as possible: The encoding uses no padding (everything is byte-aligned) and applies a number of simple techniques to save bandwidth. For example, positive integers less than 255 require a single byte instead of four bytes, and strings are not NUL terminated. This encoding is more compact (sometimes by a factor of two or more, depending on the type of data) than CORBA’s CDR (common data representation) encoding.

     


     

    •  Data is always marshaled in little-endian byte order. We rejected a receiver-makes-it-right approach (as used by CORBA) because experiments showed no measurable performance gain.

     


     

    The protocol supports compression for better performance over low-speed links. (Interestingly, for high-speed links, compression is best disabled: It takes more time to compress data than to send it uncompressed.)

     


     

    The protocol encodes request data as a byte count followed by the payload as a blob. This allows the receiver of a message to forward it to a number of downstream receivers without the need to unmarshal and remarshal the message. Avoiding this cost was important so we could build efficient message switches for event distribution.

     


     

    The protocol supports TCP/IP and UDP (user datagram protocol). For secure communications, we use SSL (secure sockets layer): It is freely available and has been extensively scrutinized for flaws by the security community.

     


     

    The protocol is bidirectional, so a server can make a callback over a connection that was previously established by a client. This is important for communication through firewalls, which usually permit outgoing connections, but not incoming ones. The protocol also works across NAT boundaries.

     


     

    Classes make the protocol more complex because they are polymorphic: If a process sends a derived instance to a receiver that understands only a base type of that instance, the Ice runtime slices the instance to the most-derived base type that is known to the receiver. Slicing requires the receiver to unmarshal data whose type is unknown. Further, classes can be self-referential and form arbitrary graphs of nodes: Given a starting node, the Ice runtime marshals all reachable nodes so graphs require the sender to perform cycle detection.

     


     

    The implementation of slicing and class graphs is surprisingly complex. To support unmarshaling, the protocol sends classes as individually encapsulated slices, each tagged with their type. On average (compared with structures), this requires 10 to 15 percent extra bandwidth. To preserve the identity relationships of nodes and to detect cycles, the marshaling code creates additional data structures. On average, this incurs a performance penalty of 5 to 10 percent. Finally, for C++, we had to write a garbage collector to avoid memory leaks in the presence of cyclic class graphs, which was nontrivial. Without slicing and class graphs, the protocol implementation would have been simpler and (for classes) slightly faster.

     


     

    Versioning. The object model supports multiple interfaces: Instead of having a single most-derived interface, an object can provide any number of interfaces. Given a handle to an object, clients can request a specific interface at runtime using a safe downcast. Multiple interfaces permit versioning of objects without breaking on-the-wire compatibility: To create a newer version, we add new interfaces to existing objects. Already-deployed clients continue to work with the old interfaces, whereas new clients can use the new interfaces.

     


     

    Used naively, multiple interfaces can lead to a versioning mess that forces clients to continuously choose the correct version. To avoid these problems, we designed the game such that clients access it via a small number of bootstrap objects for which they choose an interface version. Thereafter, clients acquire handles to other objects via their chosen interfaces on bootstrap objects, so the desired version is known implicitly to the bootstrap object. The Ice protocol provides a mechanism for implicit propagation of contextual information such as versioning, so we need not pollute all our object interfaces by adding an extra version parameter.

     


     

    Multiple interfaces reduced development time of the game because, apart from versioning, they allowed us to use loose coupling at the type level between clients and servers. Instead of modifying the definition of an existing interface, we could add new features by adding new interfaces. This reduced the number of dependencies across the system and shielded developers from each others’ changes and the associated compilation avalanches that often ensue.

     


    On the downside, multiple interfaces incur a loss of static type safety because interfaces are selected only at runtime, which makes the system more vulnerable to latent bugs that can escape testing. When used judiciously, however, multiple interfaces are useful in combating the often excessively tight coupling of traditional RPC approaches.

     


     

    Ease of Use. Ease of use is an overriding design goal. On the one hand, this means that we keep the runtime APIs as simple and small as possible. For example, 29 lines of specification are sufficient to define the API to the Ice object adapter. Despite this, the object adapter is fully functional and supports flexible object implementations, such as separate servant per object, one-to-many mappings of servants to objects, default servants, servant locators, and evictors. By spending a lot of time on the design, we not only kept the APIs small, but also reaped performance gains as a result of smaller code and working set sizes.

     

    易用性。 易用性是压倒一切的设计目标。 一方面,这意味着我们要让运行库API尽可能简单、尽可能小。 比如,29行的说明就足以定义Ice对象适配器(object adapter)的API。 尽管如此,对象适配器功能完善,并支持灵活的对象实现方式, 比如每个对象一个单独的servant(服务者)、 servant与对象的一对多映射、 默认servant、servant locator(服务者定位器) 和evictor(逐出器)。 通过在设计上投入大量时间,我们不仅让API保持很小, 而且由于更小的代码和工作集而获得了性能提升。
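    For readers unfamiliar with the object adapter, here is a minimal server sketch in the classic C++ mapping; the adapter name, endpoint, servant class AccountI, and object identity are all illustrative, and error handling is omitted.

[code]
#include <Ice/Ice.h>
#include <AccountI.h>   // hypothetical servant implementing a Slice interface

int main(int argc, char* argv[])
{
    Ice::CommunicatorPtr ic = Ice::initialize(argc, argv);

    // Create an adapter listening on a TCP endpoint, register one servant
    // under an object identity, then start dispatching requests.
    Ice::ObjectAdapterPtr adapter =
        ic->createObjectAdapterWithEndpoints("GameAdapter", "default -p 10000");
    adapter->add(new AccountI, ic->stringToIdentity("account.42"));
    adapter->activate();

    ic->waitForShutdown();   // block until the communicator is shut down
    ic->destroy();
    return 0;
}
[/code]

    The one-servant-per-object registration shown here is only the simplest option; the default servants, servant locators, and evictors mentioned in the text replace it when many objects must share a small number of servants.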

     

    On the other hand, we want language mappings that are simple and intuitive. Limiting ourselves to a small object model paid off here—fewer types mean less generated code and smaller APIs.

     

    另一方面,我们想要简单直观的语言映射。 把自己限制在一个小型对象模型上,在这里得到了回报: 更少的类型意味着更少的生成代码和更小的API。

     

    The C++ mapping is particularly important: From CORBA, we knew that a poorly designed mapping increases development time and defect count, and we wanted something safer. We settled on a mapping that is small (documented in 40 pages) and provides a high level of convenience and safety. In particular, the mapping is integrated with the C++ standard template library, is fully threadsafe, and requires no memory management. Developers never need to deallocate anything, and exceptions cannot cause memory leaks.

     

    C++映射尤其重要: 从CORBA的经验我们知道, 设计不良的映射会增加开发时间和缺陷数, 而我们想要更安全的东西。 我们最终确定的映射很小(文档只有40页), 并且提供了很高的便利性和安全性。 尤其是,这个映射与C++标准模板库(STL)集成,完全线程安全, 并且不需要手工内存管理。 开发者不必释放任何东西,异常也不会引起内存泄漏。
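    A small sketch of what that convenience looks like in practice, under the assumption of a hypothetical Slice sequence, class, and interface: sequences map to std::vector, and Slice classes are handled through reference-counted Ptr handles, so nothing below needs an explicit delete, even if an operation throws.

[code]
// Hypothetical Slice, for context:
//   sequence<string> NameSeq;
//   class Player { string name; int score; };
//   interface Roster { NameSeq names(); Player find(string name); };
#include <Roster.h>   // hypothetical generated header
#include <string>
#include <vector>

int topScore(const RosterPrx& roster)
{
    NameSeq names = roster->names();        // NameSeq is std::vector<std::string>
    int best = 0;
    for (NameSeq::const_iterator i = names.begin(); i != names.end(); ++i) {
        PlayerPtr p = roster->find(*i);     // PlayerPtr is a reference-counted handle
        if (p && p->score > best) {
            best = p->score;
        }
    }
    return best;                            // no delete anywhere; handles clean up
}
[/code]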

     

    One issue we repeatedly encounter for language mappings is namespace collision. Each language has its own set of keywords, library namespaces, and so on. If the (language-independent) object model uses a name that is reserved in a particular target language, we must map around the resulting collision. Such collisions can be surprisingly subtle, and they confirmed, yet again, that API design (especially generic API design, such as for a language mapping) is difficult and time consuming. The choice of the trade-off between ease of use and functionality also can be contentious (such as our choice to disallow underscores in object-model identifiers to create a collision-free namespace).

     

    语言映射中,我们反复遇到的一个问题是名字空间冲突。 每种语言都有自己的一套关键字、库名字空间,等等。 如果(语言无关的)对象模型使用了某个目标语言中保留的名字, 我们就必须通过映射来绕开由此产生的冲突。 这种冲突可能出奇地微妙,并且再次证实:API设计 (特别是通用API设计,比如语言映射)既困难又耗时。 在易用性和功能性之间如何权衡,也可能引起争议 (比如我们选择不允许对象模型的标识符使用下划线,以便留出一个无冲突的名字空间)。

     

    Persistence. To provide object persistence, we extended the object model to permit the definition of persistence attributes for objects. To the developer, making an object persistent consists of defining those attributes that should be stored in the database. A compiler processes these definitions and generates a runtime library that implements associative containers for each type of object.

     

    持久化。 为了提供对象持久化,我们扩展了对象模型, 允许为对象定义持久化属性。 对开发者来说,让对象持久化就是定义那些应该存储在数据库中的属性。 编译器处理这些定义,并生成运行时库, 为每种类型的对象实现关联容器。

     

    Developers access persistent objects by looking them up in a container by their keys—if an object is not yet in memory, it is transparently loaded from the database. To update objects, developers simply assign to their state attributes. Objects are automatically written to the database by the Ice runtime. (Various policies can be used to control under what circumstances a physical database update takes place.)

     

    开发者访问持久对象时,要在容器中用其键值查找它们, 如果对象尚未在内存中,它会透明地从数据库加载。 当更新对象时,开发者只要对它们的状态属性赋值。 Ice运行库会自动地把对象写入数据库 (可以使用各种策略来控制在何种情况下执行物理数据库更新)。

     

    This model makes database access completely transparent. For circumstances in which greater control is required, a small API allows developers to establish transaction boundaries and preserve database integrity.

     

    这个模型使得数据库访问完全透明。 对于需要更多控制的情况来说, 小型的API允许开发者建立事务的边界并维护数据库的完整性。
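    The usage pattern described in the last two paragraphs might look roughly like the following. This is a hedged sketch only: PlayerMap, PlayerRecord, ConnectionPtr, and TransactionPtr are hypothetical stand-ins for a generated associative container and the small transaction API; the types actually generated by the Ice tooling differ in names and details.

[code]
#include <PlayerMap.h>     // hypothetical generated container and connection types
#include <string>
#include <utility>

void addWinnings(const ConnectionPtr& db, const std::string& key, int amount)
{
    PlayerMap players(db, "players");           // std::map-like view onto the database

    TransactionPtr tx = db->beginTransaction(); // explicit boundary where integrity matters
    PlayerMap::iterator p = players.find(key);  // transparently loaded if not in memory
    if (p != players.end()) {
        PlayerRecord rec = p->second;
        rec.balance += amount;
        players.erase(p);                       // write the updated record back
        players.insert(std::make_pair(key, rec));
    }
    tx->commit();                               // flushed to the database by the runtime
}
[/code]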

     

    To allow us to change the game without continuously having to migrate databases to new schemas, we developed a database transformation tool. For simple feature additions, we supply the tool with the old and new object definitions—the tool automatically generates a new database schema and migrates the contents of the old database to conform to the new schema. For more complex changes, such as changing the name of a structure field or changing the key type of a dictionary, the tool creates a default transformation script in XML that a developer can modify to implement the desired migration action.

     

    为了允许我们改变游戏,但不必反复地迁移数据库到新的模式, 我们开发了一个数据库转换工具。 对于简单的功能添加,我们只要向工具提供新旧对象的定义, 工具会自动地生成一个新的数据库模式, 并迁移旧库的内容以符合新的模式。 对更复杂的变更,比如改变结构的字段名, 或者改变字典键值的类型, 工具会创建一个默认的XML转换脚本, 开发者可以修改它以实现所需的迁移动作。

     

    This tool has been useful, although we keep thinking of new features that could be incorporated. As always, the difficulty is in knowing when to stop: The temptation to build better tools can easily detract from the overall project goals. (“Inside every big program is a little program struggling to get out.”)

     

    这个工具很有用,尽管我们不断想到还可以加入的新功能。 与往常一样,困难在于知道何时停止: 建立更好工具的诱惑,很容易让人偏离项目的总体目标。 (“Inside every big program is a little program struggling to get out.” “每个大程序里,都有一个小程序在挣扎着要出来。”)

     

    Threading. We built a portable threading API that provides developers with platform-independent threading and locking primitives. For remote call dispatch, we decided to support only a leader/followers threading model [4]. In some situations, in which a blocking or reactive model would be better suited, this decision cost us a little in performance, but it gained us a simpler runtime and APIs and reduced the potential for deadlock in nested RPCs.

     

    多线程。 我们建立了一个可移植的线程API, 给开发者提供了平台无关的线程和锁原语。 对于远程调用派发,我们决定仅支持leader/follower(领导者/跟随者)线程模型[4]。 在阻塞或者反应模式更合适的情况下, 这个决定使我们损失了一点性能, 但是它让我们得到了更简单的运行库和API, 并且减少了嵌套RPC调用中死锁的可能性。
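    Although the dispatch model itself is fixed, the size of the dispatch thread pool is a configuration matter. The sketch below sets the server thread pool properties programmatically; the property names follow current ZeroC Ice releases, and the exact knobs available at the time of the article may have differed.

[code]
#include <Ice/Ice.h>

int main(int argc, char* argv[])
{
    Ice::InitializationData id;
    id.properties = Ice::createProperties();

    // Requests are dispatched from a server-side thread pool; these properties
    // bound how many dispatch threads the runtime may create.
    id.properties->setProperty("Ice.ThreadPool.Server.Size", "4");
    id.properties->setProperty("Ice.ThreadPool.Server.SizeMax", "16");

    Ice::CommunicatorPtr ic = Ice::initialize(id);
    // ... create adapters and servants as usual ...
    ic->destroy();
    return 0;
}
[/code]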

     

    Scalability. Ice permits redundant implementations of objects in different servers. The runtime automatically binds to one of an object’s replicas and, if a replica becomes unavailable, fails over to another replica. The binding information for replicas is kept in configuration and is dynamically acquired at runtime, so adding a redundant server requires only a configuration update, not changes in source code. This allows us to take down a game server for a software upgrade without having to kick all players using that server out of the game. The same mechanism also provides fault tolerance in case of hardware failure.

     

    扩展性。 Ice允许在不同的服务器上部署对象的冗余实现。 运行库会自动绑定到对象的某个副本(replica), 当该副本不可用时, 会自动故障切换(fail over)到另一个副本。 副本的绑定信息保存在配置中,并在运行时动态获取, 所以添加冗余服务器只需更新配置,不必修改源码。 这允许我们停掉一台游戏服务器做软件升级, 而不必把使用这台服务器的所有玩家都踢出游戏。 同样的机制还提供了硬件故障时的容错能力。
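    One way to picture the replica binding is a proxy whose endpoint list names more than one server: the runtime binds to one endpoint and retries the others if it becomes unreachable. The stringified proxy below is illustrative (identity, hosts, and ports are made up), and in the actual game this information comes from configuration rather than hard-coded strings.

[code]
#include <Ice/Ice.h>
#include <Region.h>   // hypothetical generated header

RegionPrx bindRegion(const Ice::CommunicatorPtr& ic)
{
    // Two endpoints for the same object identity: the runtime binds to one
    // replica and fails over to the other if the connection cannot be made.
    Ice::ObjectPrx base = ic->stringToProxy(
        "region42:tcp -h game1.example.com -p 10000"
        ":tcp -h game2.example.com -p 10000");
    return RegionPrx::checkedCast(base);
}
[/code]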

     

    To support federating logical functions across a number of servers and to share load, we built an implementation repository that delivers binding information to clients at runtime. A randomizing algorithm distributes load across any number of servers that form a logical service.

     

    为了支持把逻辑功能联合(federate)到多台服务器上并分担负载, 我们建立了一个实现仓库(implementation repository), 它在运行时向客户端传送绑定信息。 一个随机化算法把负载分配到组成同一逻辑服务的任意数量的服务器上。

     

    We made a number of trade-offs for replication and load sharing. For example, not all game components can be upgraded without server shutdown, and a load feedback mechanism would provide better load sharing than simple randomization. Given our requirements, these limitations are acceptable, but, for applications with more stringent requirements, this might not be the case. The skill is in deciding when not to build something as much as when to build it—infrastructure makes no sense if the cost of developing it exceeds the savings during its use.

     

    我们为复制和负载分担作了一些权衡。 比如,并非所有游戏组件都能在不停机的情况下升级, 而且负载反馈机制会比简单的随机化提供更好的负载分担。 鉴于我们的需求,这些限制可以接受; 但对于需求更严格的应用,情况可能并非如此。 技巧在于:决定何时不构建某样东西,与决定何时构建它同样重要。 如果基础设施的开发成本超过了使用它所节省的成本,构建它就毫无意义。

    SIMPLE IS BETTER

    简单就好

    Our experiences with Ice during game development have been very positive. Despite running a distributed system that involves dozens of servers and thousands of clients, the middleware has not been a performance bottleneck.

     

    在游戏开发过程中,我们使用Ice的体验非常好。 尽管运行的是包含几十台服务器和数千客户端的分布式系统, 中间件并没有成为性能瓶颈。

     

    Our focus on simplicity during design paid off many times during development. When it comes to middleware, simpler is better: A well-chosen and small feature set contributes to timely development, as well as to meeting performance goals.

     

    设计中对简单性的专注,在开发过程中多次得到了回报。 对于中间件,简单就好: 一个精心挑选的小型功能集,既有利于按时完成开发,也有利于达到性能目标。

     

    Finally, designing and implementing middleware is difficult and costly, even with many years of experience. If you are looking for middleware, chances are that you will be better off buying it than building it.

     

    最后,设计和实现中间件是困难和昂贵的,即使对于有多年经验的老手。 如果你正在寻找中间件,很有可能你最好是购买一个而不是构建一个。

    REFERENCES

    参考

    1. Mutable Realms (Wish home page): see http://www.mutablerealms.com.

    2. Henning, M., and Vinoski, S. Advanced CORBA Programming with C++. Addison-Wesley, Reading, MA, 1999.

    3. ZeroC. Distributed Programming with Ice: see http://www.zeroc.com/Ice-Manual.pdf.

    4. Schmidt, D. C., O’Ryan, C., Pyarali, I., Kircher, M., and Buschmann, F. Leader/Followers: A design pattern for efficient multithreaded event demultiplexing and dispatching. Proceedings of the 7th Pattern Languages of Programs Conference (PLoP 2000); http://deuce.doc.wustl.edu/doc/pspdfs/lf.pdf.

     

    MICHI HENNING (michi@zeroc.com ) is chief scientist of ZeroC. From 1995 to 2002, he worked on CORBA as a member of the Object Management Group’s Architecture Board and as an ORB implementer, consultant, and trainer. With Steve Vinoski, he wrote Advanced CORBA Programming with C++ (Addison-Wesley, 1999), the definitive text in the field. Since joining ZeroC, he has worked on the design and implementation of Ice and in 2003 coauthored “Distributed Programming with Ice” for ZeroC. He holds an honors degree in computer science from the University of Queensland, Australia.

     

    Originally published in Queue vol. 1, no. 10