Monthly Archives: September 2013

Doing the SDE Conversions

I’m not planning on stopping doing these any time soon, as I need them for my own porpoises, but I thought it might be worth having a few instructions out there, so people can run their own conversions of the EVE SDE. Just in case I get hit by a bus, or mauled by crazed Cetaceans.

The process is now a /lot/ simpler than it was before. Not as simple as I’d like, but pretty close.

With the first draft of this, there will be no screenshots, and it’s entirely from memory. I may update it when I do the next run.

Environment setup

This should only need to be done the once, rather than each time you want to do the conversion.

On your Windows box:

  1. Install SQL Server Express Edition 2012 with Tools. Find it here. You want ENU\x64\SQLEXPRWT_x64_ENU.exe or ENU\x86\SQLEXPRWT_x86_ENU.exe
  2. Install Mysql Server. Find it here.
  3. Install the Mysql Workbench. Find it here.
  4. Install openDBCopy. Find it here. You’ll also want a copy of Java 7 JDK installed, and the JAVA_HOME environment variable set to point at it.
  5. Drop a copy of the jTDS and Mysql JDBC drivers into the opendbcopy.copy plugin lib directory. The path should be something like C:\Program Files\openDBcopy\plugins\opendbcopy.copy\lib
  6. Configure SQL Server to allow for TCP connections. Something like this guide should help.
  7. In mysql, you want to set lower_case_table_names=2 in my.ini, under the [mysqld] section. Mine had to go into c:/windows/my.ini. This may be useful.
  8. Create a database in mysql to do the conversions into. Creating a user is good practice too, though you could just use the root user. Make sure the database is set for utf8 (there’s a sketch of this just after the list).
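
A minimal sketch of step 8, assuming the mysql server is on the same box; the database name, user and password are only examples:

CREATE DATABASE eve_sde CHARACTER SET utf8 COLLATE utf8_general_ci;
CREATE USER 'sde'@'localhost' IDENTIFIED BY 'ChangeThisPassword';
GRANT ALL PRIVILEGES ON eve_sde.* TO 'sde'@'localhost';
FLUSH PRIVILEGES;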

If you’re wanting to do a postgres or sqlite conversion, you’ll want a linux box too. You can probably do it on windows, but I’ve never done it there. If you have a linux box on the same network as your windows box, you could just use that for the mysql server, rather than doing it all on windows. Just update the connections below in an appropriate fashion.

Linux Box

  1. Install Mysql server
  2. Install Postgres
  3. Install SQLite
  4. Grab a copy of db_converter.py
  5. Grab a copy of mysql2sqlite.sh.

Actually doing the conversion

Mysql

  1. Download and extract the SDE.
  2. Fire up SQL Server Management Studio and log in.
  3. Right click on the databases tree item, and select ‘restore backup’. Pick the file option, browse to the file you’ve extracted, select it, then restore it. This post should be of use.
  4. Make sure you have a user in SQL server who has rights to the database. I just created one, then set it as the owner of the restored ebs_DATADUMP. There’s a rough T-SQL sketch of these two steps after this list.
  5. Fire up the Mysql Workbench. Select the Migration option.
  6. Go through the wizard. The first database is the SQL server one. The second is the mysql one. Once it’s copied the schema, quit out. It doesn’t (as of 6.0.7) work right for all the data in the tables; it’s also slow.
  7. Fire up opendbcopy
  8. Select the ‘copy data from a source database to a destination database’ plugin.
  9. On plugin configuration, double click the dir() option, and give it a directory. Hit next.
  10. On database connections, fill in everything as needed. Breaking my screenshot rule:

[Screenshot: the openDBcopy database connection settings]

  11. Hit apply and test for each connection. You’ll need to fix any problems at this stage.
  12. On the source model, select the proper schema. As my SQL server user owns the imported database, dbo was appropriate. Hit ‘capture source model’; it will take a little time.
  13. Capture the destination model.
  14. Hit next.
  15. Hit next again, unless you don’t want to move everything.
  16. Hit next again, unless you’ve run into trouble before and want to adjust the mapping.
  17. Hit next again.
  18. Hit execute.
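
If you’d rather script steps 3 and 4 than click through the GUI, something like this T-SQL should do it. The backup path and login name are placeholders, so adjust them to match your own extract:

-- Check the logical file names inside the backup, if you need them.
RESTORE FILELISTONLY FROM DISK = 'C:\sde\ebs_DATADUMP.bak';

-- Restore the backup, then create a login and make it the database owner.
RESTORE DATABASE ebs_DATADUMP FROM DISK = 'C:\sde\ebs_DATADUMP.bak';
GO
CREATE LOGIN sde WITH PASSWORD = 'ChangeThisPassword1!';
GO
ALTER AUTHORIZATION ON DATABASE::ebs_DATADUMP TO sde;
GO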

At this point, you should have a converted mysql database. Export it, and then import it wherever you want to put it. For the SQLite and Postgres conversions, that’ll be onto your linux box, for the next stages.

Postgres

  1. Import the converted mysql database onto your linux box.
  2. mysqldump --compatible=postgresql --default-character-set=utf8 -r eve.mysql -u root -p eve
  3. Update the username and database name appropriately in the above command.
  4. ./db_converter.py eve.mysql eve.postgres
  5. I needed to edit the eve.mysql file in a couple of places, as db_converter didn’t deal right with the tinyints. I just converted them to ints (see the example after this list).
  6. Create a postgres database to load the postgres version into (e.g. with createdb eve).
  7. psql eve -f eve.postgres
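
The tinyint edit itself is nothing clever. The column below is made up, but the change is the same for whichever columns db_converter.py chokes on in eve.mysql:

-- Before (a line from eve.mysql; the column name is illustrative):
--   someColumn tinyint(1) NOT NULL,
-- After the hand edit:
--   someColumn int NOT NULL,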

SQLite

  1. mysql2sqlite.sh -u root -pMySecretPassWord eve | sqlite3 eve.sqlite

Excel/csv

  1. Run a script (after making sure you have all the modules installed right 😉 )

Individual SQL

  1. Run a script

WTB – CREST market data endpoint

The popularity of sites like Eve-Central should go a long way to proving that the Eve Community wants market data.

Community projects like EMDR should also show that many people, myself included, consider it near vital that we have, if not instant, then time-delayed market data for use in projecting figures for manufacturing.

Unfortunately, the only way to get such data in an at all timely fashion is to cache scrape. At least some members of CCP have stated their disapproval of this method, which led to an outcry the last time the EULA was clarified, leaving us with the current state of ‘it’s against the rules, but we’ll only do something about it if you do something else bad’. It has also led, with the release of Odyssey 1.1, to at least a period of no market data, as the cache format has been updated, which means all the scrapers need to be updated to cope.

I’m calling for CCP to take steps to combat this, and offering a couple of suggestions on how this could be taken forward. The first leads to the least change required from the community. The second requires more change from us, but possibly the least from CCP.

Provide an EMDR feed

This would take the most effort from CCP: have a client that queries TQ and throws market data out onto the EMDR network. Pretty much all the major sites which deal directly with uploads talk with EMDR, so this would lead to immediate integration with those sites. There’d have to be some thought given to how the data is ordered as it’s spewed from the firehose: more popular data more frequently, less popular data less frequently. So Tritanium in Jita would be frequent, while 100mm Imp navy plates in Minmatar faction warfare space might be once a week.

Provide a CREST Market data endpoint

With appropriate caching on the results, this is probably the best long-term solution, as it requires CCP to introduce no new technology to their mix: a pair of endpoints where you can request the basic order information, or the history, for a typeid in a specific region. Cache it when it’s requested, and serve that same data for the next hour. You’ll still have sites like Eve-Central amalgamating the data for different regions, providing historical analysis, and providing their own APIs for aggregate data across multiple typeids. You’ll probably also have services like EMDR taking feeds from the people querying the API, and providing the data to everyone else, reducing the need for each of the direct queries.

CCP could even rate limit it per IP, and the players will take care of the rest.

Why

Perfect, timely market data is a liability for traders; having no exterior market data at all is more of a liability, as it precludes analysis.

No market data causes third-party developers pain. Kill boards won’t have an ISK value to apply to kills. Manufacturers will be stuck updating spreadsheets by hand, which is far from a fun job and will lead to burnout.

It’d be nice if CCP could do both.

I’m indebted to those individuals who have written the means for us to gather market data. I’m not looking to denigrate your work in any way, shape or form. I’m just not comfortable with the bus factor of such projects.

Of course, cache scraping is used for other projects too, such as libdogma, a very handy fitting library which can be integrated into websites. Without the ability to get at that fitting data, fitting tools are very, very difficult to implement.

BOM for SQL server – Guest post

As I don’t use MS SQL server, except when initially pulling the data, the SQL elsewhere on my site may not work with it, especially the SQL for grabbing the bill of materials for blueprints. This is due to little things like there being no built-in GREATEST() function, and SQL Server doing integer division when you divide two integers.
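
A quick illustration of the integer division difference:

SELECT 5 / 2;                 -- SQL Server: 2
SELECT 5.0 / 2;               -- 2.500000, once one side isn't an integer
SELECT CAST(5 AS FLOAT) / 2;  -- 2.5, which is why the query below casts wasteFactor to FLOAT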

So, when Cilegon offered me their updated SQL to work with MS SQL server, I said ‘yes please’, so I could pass it on to all you good people. And you not so good people.

So here it is. You’ll need to create the function once. The rest is for the BOM.


-- Create this function once. SQL Server has no built-in GREATEST(), so
-- dbo.greater() returns the larger of its two arguments, falling back to
-- whichever one isn't NULL.
IF OBJECT_ID('dbo.greater', 'FN') IS NOT NULL
    DROP FUNCTION dbo.greater
GO

CREATE FUNCTION dbo.greater(
    @a INT,
    @b INT
) RETURNS INT
AS
BEGIN
    RETURN
        CASE WHEN @a >= ISNULL(@b, @a)
            THEN @a
            ELSE @b
        END
END
GO

-- Parameters: the blueprint's product typeID, the character's Production
-- Efficiency skill level (declared here, though not used below), and the
-- blueprint's material efficiency level.
DECLARE @typeID int = 26374;
DECLARE @char_pe smallint = 5;
DECLARE @bp_me smallint = 2;

-- Base materials: the product's raw materials, minus the base materials of
-- any recycled extra materials (activityID 1 is manufacturing).
WITH cte1_base_mats AS (
    SELECT typeID = t.typeID,
           name = t.typeName,
           m.quantity,
           bt.wasteFactor
    FROM invTypes t
    INNER JOIN invTypeMaterials m ON m.materialTypeID = t.typeID
    LEFT JOIN invBlueprintTypes bt ON bt.productTypeID = m.typeID
    WHERE m.typeID = @typeID
    UNION
    SELECT typeID = t.typeID,
           name = t.typeName,
           m.quantity * r.quantity * -1 quantity,
           wasteFactor
    FROM invTypes t
    INNER JOIN invTypeMaterials m ON m.materialTypeID = t.typeID
    INNER JOIN ramTypeRequirements r ON r.requiredTypeID = m.typeID
    INNER JOIN invBlueprintTypes bt ON bt.blueprintTypeID = r.typeID
    WHERE r.activityID = 1 AND bt.productTypeID = @typeID AND r.recycle = 1
)

-- Waste-adjusted base materials. The CASTs to FLOAT avoid SQL Server's
-- integer division; dbo.greater() keeps negative sums from going below zero.
SELECT typeID,
       name,
       quantity = CASE WHEN @bp_me >= 0
           THEN ROUND(dbo.greater(0, SUM(quantity)) + (dbo.greater(0, SUM(quantity)) * ((CAST(wasteFactor AS FLOAT) / (@bp_me + 1)) / 100)), 0)
           ELSE ROUND(dbo.greater(0, SUM(quantity)) + (dbo.greater(0, SUM(quantity)) * (CAST(wasteFactor AS FLOAT) / CAST(100 AS FLOAT)) * (1 - @bp_me)), 0)
       END,
       base = 1,
       perfect = dbo.greater(0, SUM(quantity))
FROM cte1_base_mats
GROUP BY typeID,
         name,
         wasteFactor
ORDER BY name

-- Extra materials: everything else the manufacturing activity requires,
-- excluding skills (categoryID 16).
SELECT typeID = t.typeID,
       name = t.typeName,
       quantity = r.quantity * r.damagePerJob,
       base = 0,
       perfect = r.quantity * r.damagePerJob
FROM ramTypeRequirements r
INNER JOIN invTypes t ON t.typeID = r.requiredTypeID
INNER JOIN invBlueprintTypes bt ON bt.blueprintTypeID = r.typeID
INNER JOIN invGroups g ON g.groupID = t.groupID
WHERE r.activityID = 1
  AND bt.productTypeID = @typeID
  AND g.categoryID != 16