Doing the SDE Conversions

I’m not planning on stopping doing these any time soon, as I need them for my own porpoises, but I thought it might be worth having a few instructions out there, so people can run their own conversions of the EVE SDE. Just  in case I get hit by a bus, or mauled by crazed Cetaceans.

The process is now a /lot/ simpler than it was before. Not as simple as I’d like, but pretty close.

With the first draft of this, there will be no screenshots, and it’s entirely from memory. I may update it when I do the next run.

Environment setup

This should only need to be done the once, rather than each time you want to do the conversion.

On your Windows box:

  1. Install SQL Server Express Edition 2012 with Tools.  Find it here. You want ENU\x64\SQLEXPRWT_x64_ENU.exe or ENU\x86\SQLEXPRWT_x86_ENU.exe
  2. Install Mysql Server. Find it here.
  3. Install the Mysql Workbench. Find it here.
  4. Install openDBCopy. Find it here. You’ll also want a copy of Java 7 JDK installed, and the JAVA_HOME environment variable set to point at it.
  5. Drop a copy of the jTDS and MySQL JDBC drivers into the opendbcopy.copy plugin lib directory. The path should be something like C:\Program Files\openDBcopy\plugins\opendbcopy.copy\lib
  6. Configure SQL Server to allow for TCP connections. Something like this guide should help.
  7. In MySQL, you want to set lower_case_table_names=2 in my.ini, under [mysqld]. Mine had to go into c:/windows/my.ini. This may be useful.
  8. Create a database in MySQL to do the conversions into. Creating a dedicated user is good practice too, though you could just use the root user. Make sure the database is set to utf8. There’s a short SQL sketch of this just after this list.
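
For steps 7 and 8, here’s a minimal sketch of the MySQL side. The database name (eve), user name (eveuser) and password are placeholders; pick your own and use them in the connection settings later on.

-- Check the server picked up the my.ini setting from step 7
SHOW VARIABLES LIKE 'lower_case_table_names';

-- Step 8: a utf8 database for the conversion, plus a user with rights to it
CREATE DATABASE eve CHARACTER SET utf8;
CREATE USER 'eveuser'@'%' IDENTIFIED BY 'ChangeMe';
GRANT ALL PRIVILEGES ON eve.* TO 'eveuser'@'%';
FLUSH PRIVILEGES;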

If you want to do a Postgres or SQLite conversion, you’ll want a Linux box too. You can probably do it on Windows, but I’ve never done it there. If you have a Linux box on the same network as your Windows box, you could just use that for the MySQL server, rather than doing it all on Windows. Just update the connections below in an appropriate fashion.

Linux Box

  1. Install Mysql server
  2. Install Postgres
  3. Install SQLite
  4. Grab a copy of db_converter.py
  5. Grab a copy of mysql2sqlite.sh.

Actually doing the conversion

Mysql

  1. Download and extract the SDE.
  2. Fire up SQL Server Management Studio and log in.
  3. Right-click on the Databases tree item, and select ‘restore backup’. Pick the file option, browse to the file you’ve extracted, select it, then restore it. This post should be of use.
  4. Make sure you have a user in SQL Server who has rights to the database. I just created one, then set it as the owner of the restored ebs_DATADUMP. (There’s a T-SQL sketch of these two steps just after the conversion steps.)
  5. Fire up the Mysql Workbench. Select the Migration option.
  6. Go through the wizard. The first database is the SQL server one. The second is the mysql one. Once it’s copied the schema, quit out. It doesn’t (as of 6.0.7) work right for all the data in the tables; it’s also slow.
  7. Fire up openDBcopy.
  8. Select the ‘copy data from a source database to a destination database’ plugin.
  9. On plugin configuration, double-click the dir() option, and give it a directory. Hit next.
  10. On database connections, fill in everything as needed. Breaking my screenshot rule:

[Screenshot: the openDBcopy database connection settings]

  11. Hit apply and test for each connection. You’ll need to fix any problems at this stage.
  12. On the source model, select the proper schema. As my SQL Server user owns the imported database, dbo was appropriate. Hit ‘capture source model’; it will take a little time.
  13. Capture the destination model.
  14. Hit next.
  15. Hit next again, unless you don’t want to move everything.
  16. Hit next again, unless you’ve run into trouble before and want to adjust the mapping.
  17. Hit next again.
  18. Hit execute.
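
If you’d rather script steps 3 and 4 than click through Management Studio, something like this T-SQL sketch should do it. The file name, paths, logical file names, login name and password are all placeholders (check the real logical names with RESTORE FILELISTONLY), and a SQL login needs mixed-mode authentication enabled.

-- Step 3: restore the extracted SDE backup (all paths are placeholders)
RESTORE FILELISTONLY FROM DISK = 'C:\temp\ebs_DATADUMP.bak'
GO
RESTORE DATABASE ebs_DATADUMP
FROM DISK = 'C:\temp\ebs_DATADUMP.bak'
WITH MOVE 'ebs_DATADUMP' TO 'C:\SDE\ebs_DATADUMP.mdf',
     MOVE 'ebs_DATADUMP_log' TO 'C:\SDE\ebs_DATADUMP_log.ldf'
GO

-- Step 4: a login with rights to the database, set as its owner
CREATE LOGIN sdeuser WITH PASSWORD = 'ChangeMe123!'
GO
ALTER AUTHORIZATION ON DATABASE::ebs_DATADUMP TO sdeuser
GO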

At this point, you should have a converted MySQL database. Export it, and then import it wherever you want to put it. For the SQLite and Postgres conversions, that’ll be onto your Linux box, for the next stages.

Postgres

  1. Import the converted MySQL database onto your Linux box.
  2. mysqldump --compatible=postgresql --default-character-set=utf8 -r eve.mysql -u root -p eve
  3. Update the username and database name appropriately in the above command.
  4. ./db_converter.py eve.mysql eve.postgres
  5. I needed to edit the eve.mysql file in a couple of places, as db_converter didn’t handle the tinyints correctly. I just converted them to ints (there’s an example after this list).
  6. Create a postgres database to load the postgres version into (also sketched below).
  7. psql eve -f eve.postgres
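
A couple of illustrative snippets for steps 5 and 6. The column in the first one is made up for the example; which tinyint columns actually need editing depends on what db_converter.py chokes on. The database name matches the psql command above.

-- Step 5: the kind of edit needed in eve.mysql before re-running db_converter.py
--   before:  `published` tinyint(1) DEFAULT NULL,
--   after:   `published` int DEFAULT NULL,

-- Step 6: a database to load the converted dump into (run from psql as a postgres superuser)
CREATE DATABASE eve WITH ENCODING 'UTF8';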

SQLite

  1. mysql2sqlite.sh -u root -pMySecretPassWord eve | sqlite3 eve.sqlite

Excel/csv

  1. Run a script (after making sure you have all the modules installed right 😉 )

Individual SQL

  1. Run a script

WTB – CREST market data endpoint

The popularity of sites like Eve-Central should go a long way to proving that the Eve Community wants market data.

Community projects like EMDR should also show that many people, myself included, consider it near vital to have market data, even if it’s time delayed rather than instant, for use in projecting figures for manufacturing.

Unfortunately, the only way to get such data in an at all timely fashion is to cache scrape. At least some members of CCP have stated their disapproval of this method, which led to an outcry the last time the EULA was clarified, leaving us in the current state of ‘It’s against the rules, but we’ll only do something about it if you do something else bad’. It has also led, with the release of Odyssey 1.1, to at least a period of no market data, as the cache format was updated, which means all the scrapers need to be updated to take care of it.

I’m calling for CCP to take steps to combat this, and offering a couple of suggestions on how this could be taken forward. The first requires the least change from the community. The second requires more change from us, but possibly the least from CCP.

Provide an EMDR feed

This would take the most effort from CCP: have a client that queries TQ and throws market data out onto the EMDR network. Pretty much all the major sites which deal directly with uploads talk with EMDR, so this would lead to an immediate integration with those sites. There’d have to be thought given to how the data is spewed from the firehose: more popular data more frequently, less popular data less frequently. So Tritanium in Jita would be frequent, while 100mm Imp Navy plates in Minmatar faction warfare space might be once a week.

Provide a CREST Market data endpoint

With appropriate caching on the results, this is probably the best long-term solution, as it requires CCP to introduce no new technology to their mix. A pair of endpoints where you can request the basic order information, or the history, for a typeid in a specific region. Cache it when it’s requested, and serve that same data for the next hour. You’ll still have sites like Eve-Central amalgamating the data for different regions, providing historical analysis, and providing their own APIs for aggregate data for multiple typeids. You’ll probably also have services like EMDR taking feeds from the people querying the API, and providing the data to everyone else on them, reducing the need for everyone to make the same direct queries.

CCP could even rate limit it per IP, and the players will take care of the rest.


Why

Perfect timely market data is a liability for traders. No exterior market data is more of a liability, as it precludes analysis.

No market data causes third-party developers pain. Kill boards won’t have an isk value to apply to kills. Manufacturers will be stuck updating spreadsheets by hand, which is far from a fun job. This will lead to burn out.

It’d be nice if CCP could do both.

I’m indebted to those individuals that have written the means for us to gather market data. I’m not looking to denigrate your work in any way, shape or form. I’m just not comfortable with the bus factor of such projects.


Of course, cache scraping is used for other projects too, such as libdogma, a very handy fitting library which can be integrated into websites. Without the ability to get that fitting data, fitting tools are very very difficult to implement.

BOM for SQL server – Guest post

As I don’t use MS SQL Server, except when initially pulling the data, the SQL elsewhere on my site may not work with it; especially the SQL for grabbing the bill of materials for blueprints. This is due to little things like there being no built-in greater() function, and SQL Server doing integer division when you divide integers.
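
A quick illustration of the integer division point, and why the CASTs in the query below matter:

SELECT 10 / 4;                   -- 2: both operands are integers, so the result is truncated
SELECT CAST(10 AS FLOAT) / 4;    -- 2.5: cast one side to float to get the expected answer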

So, when Cilegon offered me their updated SQL to work with MS SQL Server, I said ‘yes please’, so I could pass it on to all you good people. And you not so good people.


So here it is. You’ll need to create the function once. The rest is for the BOM.


-- Helper function, created once: SQL Server has no built-in greater()/greatest().
IF OBJECT_ID('dbo.greater', 'FN') IS NOT NULL
    DROP FUNCTION dbo.greater
GO

CREATE FUNCTION dbo.greater(
    @a INT,
    @b INT
) RETURNS INT
AS
BEGIN
    RETURN
        CASE WHEN @a >= ISNULL(@b, @a)
             THEN @a
             ELSE @b
        END
END
GO

-- Parameters: the product typeID, character Production Efficiency, and blueprint ME.
DECLARE @typeID int = 26374
DECLARE @char_pe smallint = 5    -- not used in the queries below
DECLARE @bp_me smallint = 2;     -- semicolon needed, as the next statement starts with WITH

-- Base materials, with the materials of any recycled components subtracted.
WITH cte1_base_mats AS (
    SELECT typeID = t.typeID,
           name = t.typeName,
           m.quantity,
           bt.wasteFactor
    FROM invTypes t
    INNER JOIN invTypeMaterials m ON m.materialTypeID = t.typeID
    LEFT JOIN invBlueprintTypes bt ON bt.productTypeID = m.typeID
    WHERE m.typeID = @typeID
    UNION
    SELECT typeID = t.typeID,
           name = t.typeName,
           m.quantity * r.quantity * -1 quantity,
           wasteFactor
    FROM invTypes t
    INNER JOIN invTypeMaterials m ON m.materialTypeID = t.typeID
    INNER JOIN ramTypeRequirements r ON r.requiredTypeID = m.typeID
    INNER JOIN invBlueprintTypes bt ON bt.blueprintTypeID = r.typeID
    WHERE r.activityID = 1 AND bt.productTypeID = @typeID AND r.recycle = 1
)
SELECT typeID,
       name,
       -- the CASTs avoid SQL Server's integer division when applying the waste factor
       quantity = CASE WHEN @bp_me >= 0
                       THEN ROUND(dbo.greater(0, SUM(quantity)) + (dbo.greater(0, SUM(quantity)) * ((CAST(wasteFactor AS FLOAT) / (@bp_me + 1)) / 100)), 0)
                       ELSE ROUND(dbo.greater(0, SUM(quantity)) + (dbo.greater(0, SUM(quantity)) * (CAST(wasteFactor AS FLOAT) / CAST(100 AS FLOAT)) * (1 - @bp_me)), 0)
                  END,
       base = 1,
       perfect = dbo.greater(0, SUM(quantity))
FROM cte1_base_mats
GROUP BY typeID,
         name,
         wasteFactor
ORDER BY name

-- Extra materials for the manufacturing activity (activityID 1), skipping skills (categoryID 16).
SELECT typeID = t.typeID,
       name = t.typeName,
       quantity = r.quantity * r.damagePerJob,
       base = 0,
       perfect = r.quantity * r.damagePerJob
FROM ramTypeRequirements r
INNER JOIN invTypes t ON t.typeID = r.requiredTypeID
INNER JOIN invBlueprintTypes bt ON bt.blueprintTypeID = r.typeID
INNER JOIN invGroups g ON g.groupID = t.groupID
WHERE r.activityID = 1
  AND bt.productTypeID = @typeID
  AND g.categoryID != 16

Ship EHP Evaluator – The making of

As stated in my last post, making this tool was both easier and more difficult than I was expecting. I thought I’d explain that, and shed a little light on a couple of libraries that made this oh so much easier than originally expected.

The libraries in question are LibDogma and PHP-Dogma. These libraries meant that I didn’t have to think about what any modules did. I just tell them ‘This ship, these skills, these modules and rigs’ and I get back an answer saying ‘this hp, these resists’.

This is the core of any fitting tool, and what makes writing fitting tools so much, umm, fun. So many little caveats, the various stacking rules, and so on, and so on. Libdogma takes care of all of it. php-dogma lets me access libdogma from inside a php page.

With that out of the way, it was a straight run home, tying it into the code I’d already written for the evaluator to parse the input. And just a little for calculating and outputting the EHP. That’s the ‘Easier than expected’ bit.

The ‘more difficult than expected’ part comes from the expectations I had when I found libdogma. I’m using a CentOS 6 server, which comes with gcc 3.4.4. That won’t compile libdogma. So I found an updated rpm and tried that. No dice.

Eventually I downloaded and compiled clang (the latest version) from source. That built libdogma fine, and it passed all the tests.

At which point php-dogma refused to compile. It appears to have been a problem with php 5.3.27, as after an upgrade to 5.4, it all worked fine. And the first time I tried, I’d forgotten to set PKG_CONFIG_PATH, so it didn’t find libdogma. Took me a little time to run that one to ground.

Short sentences, to cover quite a lot of time banging on things, and hoping they’d work.

Anyway, as php-dogma is AGPL, I’ve released the code on GitHub under the AGPL as well. Short version, if you don’t know the license: add it to what you want, but if you let people use it, then you have to let them have access to the source, including if it’s provided as a service. Obviously, IANAL, so you’ll want to look at the license in more detail if you’re not happy releasing your code.

Ship EHP Evaluator – The Tool

The idea for an update to my scan evaluator tool came up on twitter and it struck me as interesting. Turned out to be both more and less complicated than I thought it might be, but I’ll go into that in a different post, in case you’re interested in the techy details.

The non-techy details are pretty simple. Take a ship scan, paste it into the tool and bang, you get an estimation of how much EHP the ship has.

Now, because I don’t have a huge database of people’s stats (and if I did, I wouldn’t use it, as I wouldn’t have people’s permission for it. So they’d stop giving me the stats), it’s assuming that they have all relevant skills at 5. And the irrelevant ones, but those aren’t relevant. It’s also not giving them any fleet boosts. And there’s the possibility the ship scan missed a tank module, so you didn’t tell the tool about it.

Aside from those caveats, all the test ships I’ve thrown at this have come out with the figures (+/- 1 hp) that pyfa was telling me (aside from one bug that I’ve reported.)

Of course, no warranty is given for people using this for nefarious purposes, such as scanning cargo ships and deciding how many ships to commit to attacking them.

Credit is due to Artefact2, without whom I couldn’t have done this (at least, not in the short period of time it took.)

I have some thoughts about functionality to add, such as alerts about open slots. But that’s a future update. This is the initial release.

No warranty is given for any purpose. If your computer explodes when using the tool, it’s all your fault; you should have been nicer to it.

Blueprint calculator update: ISK/HR normalized for 24 hour production runs

This only applies to T2 items in the blueprint calculator.

You’ll find a new row in the Time Calculations box, called ‘iskh 24H rounding’.

This is taking the time taken to manufacture all the runs for the invented blueprint, rounding it up to the next 24 hours (so a 3 hour run takes 24 hours, a 25 hour run takes 48 and so on) and then calculating the isk/hr as if that’s the time taken.

This is so you can have an idea of how much isk/hr you’re making, if you’re only putting jobs in once a day, at the same time. It’s not perfect, of course. If you have a window of several hours, then a 25 hour job probably won’t consume 48 hours of a slot.

But if you look at something like a Hobgoblin II then you can see how it affects your profitability if you’re doing a single run per day.


As it’s not relevant for non-limited runs, it’s not added for those.

Modular Assembly Arrays and Labs

We’re not going to get modular POS for a long time. That’s pretty much a given, with what’s been said. It’s just too complicated to modify the POS code to allow it, due to the big ball of mud programming that plagues any large software project. (Just slap new features onto older code, because doing it any other way would take a really long time, relatively speaking.)

But, perhaps, we could have a smaller change done, which should require less interaction with the main POS code. I’m making a number of assumptions when I’m talking about this, which may or may not be valid.

There are a number of places where working with POS is somewhat painful. The changes which have happened recently with the removal of the 3km limit have helped a lot, but I’d like things to go a little further.

The main change I’d like: make assembly arrays and labs modular. So you can deploy a single array (one of each type would be fine) and then expand it. No more having a POS with 4 advanced labs; you’d have a lab that you could expand with more lab slots, of the varying types. Probably in packs, to reduce the min/maxing potential.

If you could do it with a single Industry structure, that would be even better. Storage modules, assembly modules for each of the different types, and so on. But I can see the restricted manufacturing slots, and the different speeds of labs being a trifle complicated to manage.

Once that’s all in place, you may be able to expand it further, to include more and more of the functionality of the POS.

Ideally the space factories could be anchored somewhere other than at a POS. Just give it a fuel bay module, and a module for power generation. (Yes, I know the ‘Just’ is making it sound simple. It’s not.) Possibly shield extenders modules, hardeners and so on. Tying them to moons is less than ideal, in my mind. Deep space factories would be neat.

As I’m wanting the moon mining to be shifted off into a totally different structure, to allow for raiding, perhaps once that’s all done, the old POS code could just be retired?

And yes, I know this would get rid of bubble shields. Which some people consider to be very important. If they’re really needed, how about an anchorable structure, such as the warp disruption bubbles? Though I’m a fan of my other suggestion, for having it possible to switch off your warp core, and then be unable to be scanned down. Which would be a similar result, if you’re not unlucky enough to be scanned down at your safe before you switch off.


And yes, this all came to mind by me being annoyed about moving materials between assembly arrays. A single storage pool would be nice.

BP Calculator: T2 Invention update

Just another quick update.

The Form Entry page now has another submit button, called ‘Enter list – T2 Invention’. If you use this, it will take a list of T1 blueprints which can be invented from (ignoring anything else) and produce a list of everything you can make, along with a basic price for them. It doesn’t deal with decryptors, so it’s not doing all the work for you. And I’m not going to simplify it to just take a list of blueprints without the rest of the line.

But if you already have a list of blueprints, it’ll help pick out the ones you might want to look at. Just remember, there’s no market volume data. Just price data.

Quick update for the blueprint list

Just a quick update on the blueprint calculator. (Well, It wasn’t that quick to do. But it should be very simple to understand.)

The blueprint list created by the form, https://www.fuzzwork.co.uk/blueprints/enterlist.php, will now show an ISK/hr, an ISK/hr for POS manufacturing (basic arrays only), and an ISK/hr for POS with the datacore cost removed, for -4 ME / -4 PE blueprints.

It’s not perfect; T2 ships can’t be made in regular ship arrays, and this doesn’t pay attention to that. And R.A.M.s are kinda ignored, due to the rounding to less than 1 unit. But it’s indicative enough, and simple to check up on. All skills are assumed to be 3 when determining the cost of the datacores.

Long term, I’m planning on making it possible to override the skills, but that probably won’t happen till SSO is available, as I don’t want to go to the trouble of writing a full user management system when something should be coming out some time in the next 6 months or so.

The github is mostly up to date, but I need to add in the details for how to manage the infrastructure needed, like a copy of price data in the database, possibly with a data loader, to pull it from my archived copy.

Updates and future plans

If you’ve dropped by the blueprint calculator recently, you’ll have noticed a few changes.

New features

  • Regional prices. You can now pick the region on the price list. It will always default to Jita for now.
  • Collapsible price list. It will remember the state you left it in.
  • Parser for a blueprint list. If you open up the S&I interface on the blueprint list (or corp blueprint list), then double-click on the location, you’ll get a window you can cut and paste from. Paste it in, and you’ll get a list of links, with a profit per unit.
  • General optimization. Performance had suffered since I added the table sorting. It should now be back to normal.
  • Added a while back: Decryptors in the invention figures.

Future plans

  • Once the SSO becomes available, you’ll be able to log in and store blueprints, if you want to. This will be on the server, rather than in the client. I’ll probably leave the option to store with cookies too, if you don’t trust me with that. Other preferences will be stored as well.
  • Once CREST becomes available, with a character sheet end point, your skills (if desired) will autofill.
  • Inclusion of invention prices on the parsed results.
  • Inclusion of isk/hr (ish) on the parsed results
  • Addition of a menu bar, to bounce between applications.


SSO/CREST:

Don’t worry about the security of SSO. The way OAuth2 works is:

  • You hit the login button, which sends you to login.eveonline.com with a token that identifies where you’re coming from.
  • You log into Eve online, and pick the character you want to be.
  • You get sent back to the source site, with a long token.
  • The source site sends the token to eve online asking ‘who is this?’
  • Eve responds, identifying who you are and invalidating the token for future use. (CREST works a little differently but not significantly.)

At no time do you give anyone except CCP your password. Any third party site asking for it is either badly coded, or a scam. Ideally you’d log into the forums, and then you won’t need to type in a username or password into any site.