Category Archives: Dev

Google Sheets – Updating from a Menu

A problem people keep running into is that Google Sheets won’t automatically update figures from custom functions. You’ll often find functions which suggest adding an otherwise useless reference to a cell, which you can then change to trigger an update.

This is obviously somewhat suboptimal. So, as a potential solution for you good people, I give you the menu-based price list.

Google Sheets allows you to have custom functions, but you can’t write to the sheet from them, only return data. However, Apps Script called from a menu option can write to its heart’s content. The downside is that you can’t pass it any parameters, so the input data must be stored somewhere the script can retrieve it by name, along with somewhere to output; a named range is one option, but I’m taking the lazy option and using sheets with specific names.

So, with the link above, you can see how to do it. There’s an example sheet you can copy.

As a little explanation:

The onOpen function adds the menu. It should be possible to update the data from there too, but I ran into a bug, so it’s purely on refresh.
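In case you’ve not seen one, the menu hook is only a few lines of Apps Script. A minimal sketch (the menu and function names here are mine, not necessarily what’s in the example sheet):

// Runs automatically when the spreadsheet opens, adding a custom menu
// whose item calls the updatePrices function described below.
function onOpen() {
  SpreadsheetApp.getUi()
      .createMenu('Prices')
      .addItem('Refresh prices', 'updatePrices')
      .addToUi();
}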


The updatePrices function is pretty much the same as the one you’d use as a custom function, with a couple of key differences. First, you get the two sheets which matter, and clear the output sheet, so we can simply append to it.

Get the region ID from cell A1 of the typeid sheet.

Now it just runs through the first column, from row 2 (so it ignores the region) to the last row, and dedupes the list. Sure, you could just fetch the data twice for any duplicates, but this is more efficient.

Then it runs through the type list 100 IDs at a time, fetching the JSON, running through it, and appending the results to the sheet. The overall shape is sketched below.
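Put together, it looks something like this. This is a sketch, not the exact code from the example sheet: I’m assuming input and output sheets named ‘typeids’ and ‘prices’, and I’m pointing it at my aggregates endpoint (described further down this page), so the JSON field names are from that.

// Called from the menu: reads type IDs, fetches prices in batches of 100,
// and appends the results to the output sheet.
function updatePrices() {
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  var input = ss.getSheetByName('typeids');   // assumed input sheet name
  var output = ss.getSheetByName('prices');   // assumed output sheet name
  output.clearContents();                     // so we can just append
  var regionId = input.getRange('A1').getValue();
  // First column, row 2 (skipping the region) to the last row
  var ids = input.getRange(2, 1, input.getLastRow() - 1, 1).getValues()
      .map(function(row) { return row[0]; });
  // Dedupe the list
  var unique = ids.filter(function(id, i) { return ids.indexOf(id) === i; });
  // 100 IDs at a time
  for (var i = 0; i < unique.length; i += 100) {
    var batch = unique.slice(i, i + 100);
    var url = 'https://market.fuzzwork.co.uk/aggregates/?region=' + regionId +
        '&types=' + batch.join(',');
    var json = JSON.parse(UrlFetchApp.fetch(url).getContentText());
    batch.forEach(function(id) {
      output.appendRow([id, json[id].sell.min, json[id].buy.max]);
    });
  }
}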

Using ESI with Google Sheets

I’ve been meaning to write this post for a while now, so here it finally is: how to use ESI with Google Sheets.

Now, the unauthenticated endpoints are relatively easy. You’ll still need to write a little code for them, or steal it from somewhere else, but other than that, it should just work. Authenticated endpoints are another matter.

ESI uses Eve’s SSO for authentication, which means that at some point during the setup you’ll need to handle the OAuth flow to get what’s known as a refresh token. This token is much like an XML key from the older API, allowing you to access the endpoint without any human intervention. (It’s actually used to get the access token, which lets you do things for about 20 minutes; the refresh token can be used at will to create new access tokens.)

Setup

Because of the way this works, you’re going to have to go to the developers site and create a new application. Each application has a client ID and a secret, and that secret should be held closely and not revealed to the general public, which is why you need to create your own.

So, go to the site, log in, and click on ‘application’ up at the top. You can then click on ‘create new application’.

Give it a name and a description. I’d suggest something meaningful to you for the name, but the description isn’t important; you just need something there. Pick ‘Authentication & API Access‘ as the type.

Now you need permissions. These are what are known as ‘scopes’ in OAuth. They’re like the various access levels you can pick for an XML API key, allowing access to different things. Which scopes you pick depends on what you actually want to do with the login. When actually doing the login, you tell it which scopes you want; as long as they exist in the application definition you’re filling in now, you can get them. You don’t need to ask for them all.

For this example, I’m asking for the esi-universe.read_structures.v1 and esi-markets.structure_markets.v1 scopes. This allows me to pull the market data from a structure I have access to (and get its name). Just click them, and they should move over to the box on the right.

Now comes the ‘complicated’ bit: the callback URL. When you use OAuth, you get sent to the login server, log in there, then get redirected back to the source; the callback URL is where you’re sent back to. As you (probably) don’t have a webserver for this, we’re going to use a tool called Postman to handle this step.

Fill in https://www.getpostman.com/oauth2/callback as the callback URL. Click on ‘create new application’.

Finally, click on ‘view application’ next to your newly created one.

Postman

Postman is a Chrome app designed for working with APIs. We’ll be using it to handle the initial authentication. Get it from its site; once you’ve installed it, run it.

On the Authorization tab, change the dropdown to ‘OAuth 2.0’, then click the ‘get new access token’ button and fill in the following:

Auth URL: https://login.eveonline.com/oauth/authorize

Access Token URL: https://login.eveonline.com/oauth/token

Client ID: The ID from the application page you just created

Client Secret: the Secret Key from the application page you just created

Scope: the scopes you want, separated by spaces. So in this case:

esi-universe.read_structures.v1 esi-markets.structure_markets.v1

Finally, hit ‘request token’. This should send you to the login page for Eve. If it doesn’t work, make sure you have no extra spaces at the beginning or end of the various options.

Once that’s done, and you’re authorized, you should see a token in the existing tokens box.

This lists your access token (you can ignore this, tbh; you’ll be getting a new one) and your refresh token (this is important, so take a note of it, along with your client ID and your secret).

That’s you finished with Postman, until you want a new token for something else.

Google Sheets

Google Sheets doesn’t handle JSON or OAuth natively. This is a bit of a pain, but the ability to add custom functions takes care of that nicely. Especially when someone else writes them for you 😉

For this example, we’ll create a new workbook.

On your new workbook, create a new sheet and call it config. Put the client id, the secret, and the refresh token into separate cells on it.

On the Data menu, pick ‘Named ranges…’. This will open a sidebar. Select the clientid cell, and click ‘Add a range’. Call it clientid, and hit Done. Repeat with the secret and the refresh token, calling them ‘secret’ and ‘refresh’.

You can then close the sidebar. This is just to make it easy to refer to them later.

Now, click on Tools -> Script editor. This will open a new window; switch to it.

Delete the contents of Code.gs (it should be an example myFunction only; if not, you’re in uncharted waters).

Paste in the functions from my GitHub. I’d explain them all, but that would massively balloon this post, and you’d need to understand JavaScript. (They’re relatively simple.)
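That said, the heart of it, the refresh-token exchange, is worth seeing. Here’s a rough sketch (not the exact code from the repo) of getting an access token using the named ranges we just created; the URL is the same token endpoint we gave Postman:

// Swap the long-lived refresh token for a short-lived (~20 minute) access token.
function getAccessToken() {
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  // The named ranges created on the config sheet
  var clientId = ss.getRangeByName('clientid').getValue();
  var secret = ss.getRangeByName('secret').getValue();
  var refresh = ss.getRangeByName('refresh').getValue();
  var response = UrlFetchApp.fetch('https://login.eveonline.com/oauth/token', {
    method: 'post',
    headers: {
      // HTTP Basic auth: base64 of clientid:secret
      Authorization: 'Basic ' + Utilities.base64Encode(clientId + ':' + secret)
    },
    payload: { grant_type: 'refresh_token', refresh_token: refresh }
  });
  return JSON.parse(response.getContentText()).access_token;
}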

Finally, on a sheet without data you care about (as this is a new workbook, sheet 1 works) use

=getCitadel(1023164547009)


and it should (eventually) populate with all the orders from that citadel. Get the citadel ID from a bookmark, asset ID, or market order.
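For the curious, the shape of such a function is roughly this (again a sketch, not the code from the repo): it uses the getAccessToken sketch above, pages through ESI’s structure-markets endpoint, and returns a 2D array, which is how a custom function fills a range.

// =getCitadel(structureId): returns the orders in a structure’s market.
function getCitadel(structureId) {
  var token = getAccessToken();
  var rows = [['type_id', 'price', 'volume_remain', 'is_buy_order']];
  var page = 1, pages = 1;
  do {
    var response = UrlFetchApp.fetch(
        'https://esi.tech.ccp.is/latest/markets/structures/' + structureId +
        '/?page=' + page,
        { headers: { Authorization: 'Bearer ' + token } });
    // ESI reports the total number of pages in the X-Pages header
    var h = response.getHeaders();
    pages = Number(h['X-Pages'] || h['x-pages'] || 1);
    JSON.parse(response.getContentText()).forEach(function(order) {
      rows.push([order.type_id, order.price,
                 order.volume_remain, order.is_buy_order]);
    });
    page++;
  } while (page <= pages);
  return rows;  // a custom function populates a range by returning a 2D array
}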


I need to do more work on this, as it’s not storing the access token anywhere (there’s an issue with sheets updating cells from custom functions).

Ore compression calculator

I’ve updated the compression calculator with a couple of new features, and stomped out a bug.

The bug was that it had the wrong volume for some ores (oops; it was hooked up to an old database). That’s now sorted, and it’s hooked up to the DB which updates when a new version comes out.

New Features:

Output values: You are told how many of each mineral you should get out in the end.

Ore selection: You can choose between all ores (the default) and highsec ores only.


Are other options for ore selection useful to you? Would you like to be able to specify nullsec only? Lowsec only? Be able to pick and choose, with max limits for certain kinds, so only 15 compressed spod, no Arkonor, but as much of the others as you want? Presets are simple from an interface perspective; full customization is somewhat more painful.

The backend, now that I’ve re-familiarized myself with it, is simple enough to update.

As always, the code is in my GitHub repository and can be freely used. However, it does require an external third-party library, which makes it a bit more painful to install, especially if you’re on shared hosting (probably impossible then).

EDIT:

Nullsec-only and lowsec/nullsec (for the rig) filters added.

Bug stomped wrt Dark Ochre

market.fuzzwork.co.uk

With the introduction (a wee while ago) of the new eve market data endpoint, and the problems which surfaced at Eve Central while that was happening, I decided to create my own market data website. And so, I present market.fuzzwork.co.uk

I posted it was out a few days ago, but thought I should write a little more about it, and the process of creating it.

Background

At Fanfest, in one of the round tables, I brought up (again) the idea of an endpoint to get the order book for an entire region at once. Most people just want aggregates, but that’s not something that would be easy for CCP to provide for multiple items at the same time: partly because the aggregation takes time, and partly because multiple items make caching a lot harder (different people ask for different lists of things). Lo and behold, a short time later, such an endpoint came into being. It’s paginated into batches of 30,000 items, which is enough for most regions, though The Forge has 9 pages.
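Walking the pages is just a matter of following each page’s ‘next’ link until it runs out. A browser-JavaScript sketch of the pattern (the host and route here are from memory, so check them against the CREST root; the real downloader is the Python script described below):

// Follow a paginated CREST collection to the end, collecting the items.
function fetchAllOrders(url, orders, done) {
  $.getJSON(url, function(page) {
    orders = orders.concat(page.items);
    if (page.next) {
      fetchAllOrders(page.next.href, orders, done);  // more pages to go
    } else {
      done(orders);  // that was the last page
    }
  });
}

fetchAllOrders('https://crest-tq.eveonline.com/market/10000002/orders/all/', [],
    function(orders) { console.log(orders.length + ' orders in The Forge'); });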

Details

So I rented a new server (with an offshoot of OVH called kimsufi) and set about building one. It’s a PHP site, running on an NGINX server, with a PostgreSQL database, and Redis for caching the aggregates.

I wrote the downloading script in Python. After abortive attempts at aggregating the data, first in the database itself, then in the script, I settled on using Pandas to do the aggregation: the plain-script version would have taken over an hour to process, while the Pandas version runs in a few minutes. This lets me grab the data once every 30 minutes and retain it for a couple of days, which means you can look at snapshots of an order over that timeframe to see how it changes.

That retention brought problems of its own. Not so much in keeping the data (each grab adds a couple of hundred meg of data and indexes) but in cleaning it up. Or, to be specific, the effect the deletions have on the import. Turns out the database doesn’t like it when you’re inserting and deleting 1.7 million rows every thirty minutes; it’s down to how it stores them. I won’t get into technical details, but the import went from a couple of minutes to over 15, which impacted rather negatively on the performance of the site. The process wasn’t taking much CPU time, but it completely pegged disk activity at 100% and led to the site timing out. Not good.

How to solve this issue? One way would be to get a server with SSDs. Unfortunately, these are kind of expensive, relative to the one I’m using. I’m not made of money, after all. So I put together a Patreon campaign. If you want to contribute to the upkeep of the servers, I’d appreciate a pledge, however small it is. (Small is good in fact. My expenses aren’t that high. I’d feel bad about offloading everything to someone else)

However, a thought came to me recently: I’m using Postgres, and it can partition data based on specific conditions. So I can have the data stored in a partition per order set, and just truncate partitions as they age out. This is far more efficient than deletion, and shouldn’t impact the import speed. It’s not fully tested yet, but it’s looking somewhat better already. It’ll increase my data retention a bit (up to 200 samples, about 4 days, rather than the 96, or 2 days, I was planning), but space isn’t too much of a concern that way. And partitions allow for more efficient queries.

Future

I still need to write the API interface for it, so you can request an arbitrary number of aggregate/region combinations, but that isn’t too far out. And I’ll provide a sample method for importing into Google Sheets. In addition, I’m providing the downloaded order book as a gzipped CSV, for an undetermined period of time (it’s all dependent on space, tbh, and zipped they’re not huge).

I also need to decide how I’m going to winnow the aggregates. The API will only allow for ‘live’ data, as that’s all I’m keeping in Redis, but all the data is also being kept in a database table. I’ll likely add a page so you can see how they change over time. To keep the table from becoming an absolute monster, I’m thinking I’ll keep only a week at 30-minute intervals, then daily averages if you want to look over a longer time frame.

In case you’re interested, I’m generating a weighted average (price times volume, summed up, then divided by the total volume), the max, the min, and a weighted average of the lowest/highest 5% of the orders (lowest for sell, highest for buy), ignoring any price more than 100 times the lowest/highest price. That filter helps when someone puts in a buy order for a million Gilas at 0.01 ISK, which would otherwise completely screw with the average prices.
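Written out, for orders with prices $p_i$ and volumes $v_i$, the weighted average is just

$\bar{p} = \frac{\sum_i p_i v_i}{\sum_i v_i}$

and the 5% figure is the same formula restricted to the cheapest (for sell) or dearest (for buy) orders, taken until their combined volume reaches 5% of the total.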

Other

Oh, and I put together a very simple fleet tracker. It doesn’t do anything other than show you where fleet members are and what they’re flying. It needs a CREST privilege, a link you get from the fleet window, and for you to be fleet boss. I may expand it a little in the future, maybe for invites too, though that would require me to store the access and refresh tokens somewhere other than the user’s session.

As normal, all my code is on my GitHub. It might be a little behind the release, but that’s purely down to me not committing it yet.

I’m open to suggestions, if you have them.

Anyway, Steve signing out.

New SDE Migration process

CCP Tellus is a wonderful person.

Until recently, the SDE (Static Data Extract) was a mix of YAML, an MS SQL server backup, and an SQLite database for universe data. This was a bit of a pain to convert, with a whole bunch of steps.

Now the SDE is provided as a bunch of YAML files. These, by themselves, aren’t a hugely useful format, but a single input format makes it easier to write a conversion routine: I can convert into several formats by running a single script (now that I’ve written it). This makes me a very happy man.

https://github.com/fuzzysteve/yamlloader is the repository for the new script. It’s written to convert into MySQL, SQLite, PostgreSQL, and PostgreSQL with a specific schema, and it’s pretty simple to extend to other formats (MS SQL is a possibility, but as I can’t run that on my Linux server, I’m not doing it at the moment).

In addition, with the SDE being in YAML, it’s a lot easier to generate the differences between versions. I’ve loaded it into a git repository, which is publicly available: git://www.fuzzwork.co.uk/sde-tq.git

Feel free to look into it 🙂


So, short version: I have a new SDE migration process. It may lead to slight differences in output, but the process is a lot easier for me. And if I get hit by a bus, it’s a lot easier for someone else to take over.

Using Crest with Excel: Crest Strikes Back

As you may know if you’ve read my older post on it, using CREST with Excel can be a complete pain in the ass. You have to load in a module, then write something in VBA using it, because VBA doesn’t support JSON natively; as for doing it with a user-defined function, well, I never got that far before throwing my hands up and walking away.

Now, I got that working eventually, but I didn’t like it. I played with Power Query for a while, and it kinda worked, but I didn’t find a way to easily get it to pull a bunch of data and reprocess it the way I wanted. So, not so good for CREST prices.

So I was still on the lookout. And I recently came across xlwings, and it made me very happy indeed. I like Python, and I’m fairly good with it now. So, combining the two seemed like a winner.

After a little playing with it, I got it all working, and I now have an Excel sheet which does CREST lookups, and all it needs is Python, xlwings, and requests installed. So, on to the instructions:

Install Python. I’ve been using 2.7.10, but 3 should work just fine too. Make sure it’s in your path.

Install pywin32 (I’m using this one, for 2.7, 32 bit)

Using pip, install xlwings (you’ll do this in a command prompt):

[where you installed python, like c:\python27]\scripts\pip install xlwings

Using pip, install requests:

[where you installed python, like c:\python27]\scripts\pip install requests

Using pip, install the security extras for requests, to stop it bugging you:

[where you installed python, like c:\python27]\scripts\pip install requests[security]

Now, using your command prompt, go to the directory where you want to create your new workbook, and run the following command (myneatcrestbook is the name of your new workbook; change it now if you want):

xlwings quickstart myneatcrestbook

If this fails, it’s probably because your Python scripts directory isn’t in your path. So try:

[where you installed python, like c:\python27]\scripts\xlwings quickstart myneatcrestbook

With that done, you’ll find a new directory containing an Excel sheet and a Python file. Open the Excel sheet and enable macros (it’ll probably just prompt you). You’ll also want to turn on the Developer ribbon, under File -> Options -> Customize Ribbon.

On the Developer ribbon, click ‘Macro Security’, and check ‘Trust access to the VBA project object model’. Hit OK.

Then click the ‘Excel Add-ins’ button, and click Browse. Browse to [where python is]\Lib\site-packages\xlwings, and select the xlwings.xlam file. Hit Open, then check the box next to xlwings.

You should now have an xlwings ribbon entry.


That done, you can add code to the Python file, come back to Excel, and hit ‘Import Python UDFs’ to make them live.

When you want it to reprocess your functions (say, to update the data again and get new prices), use Ctrl+Alt+F9.

Below you’ll find a copy of a simple script to get the 5% average sell price, called with =crestSellPrices(34,10000002)

from xlwings import xlfunc
import requests

@xlfunc
def crestSellPrices(typeid, regionid):
    # Grab all the sell orders for this type, in this region
    response = requests.get('https://public-crest.eveonline.com/market/{}/orders/sell/?type=https://public-crest.eveonline.com/types/{}/'.format(int(regionid), int(typeid)))
    data = response.json()
    # Tally up the volume on sale at each price point
    numberOfSellItems = 0
    sellPrice = dict()
    for order in data['items']:
        sellPrice[order['price']] = sellPrice.get(order['price'], 0) + order['volume']
        numberOfSellItems += order['volume']
    if not numberOfSellItems:
        return None  # no sell orders at all
    # Walk up the cheapest 5% of the order book, averaging what we 'buy'
    prices = sorted(sellPrice.keys())
    fivePercent = max(numberOfSellItems / 20, 1)
    bought = 0
    boughtPrice = 0
    while bought < fivePercent:
        fivePercentPrice = prices.pop(0)
        if fivePercent > bought + sellPrice[fivePercentPrice]:
            # Take everything on sale at this price point
            boughtPrice += sellPrice[fivePercentPrice] * fivePercentPrice
            bought += sellPrice[fivePercentPrice]
        else:
            # Take just enough at this price point to hit the 5% mark
            diff = fivePercent - bought
            boughtPrice += fivePercentPrice * diff
            bought = fivePercent
    return boughtPrice / bought

CREST market data

This has been out for a while, but now that I’ve got something public out there ( https://www.fuzzwork.co.uk/market/viewer2/ ), I thought I’d write some words.

Of all the things which have happened in my CSM term, the availability of ‘live’ market data is one of the ones I’m happiest about. It might not seem like a big step, but we now have access to data directly from Tranquility, in a way that has a very low cache time. (I’m expecting some future endpoints to be even lower, but for more specific data.) It’s been a long time coming, and, along with the dgmExpressions table being added to the SDE, is a step towards removing the ‘requirement’ for the grey area of cache scraping. Grey areas are bad, because they lead people to push the boundaries, and I don’t want to see someone banned for that.

But enough electioneering (Vote for me! 🙂 ). I thought I’d write a little about what it’s doing. I had help in the form of Capt Out’s Crest Explorer, which gave me a head start, rather than needing to implement the core ‘log into the SSO’ bit. After that, it’s just a matter of:

  • Grab the region list. (filtering out the Wormholes)
  • Grab the base market groups.
  • Wait for the user to click something
    • If it’s a market group, grab that specific group, and loop through it for direct children
    • If it’s an item, and a region is selected, grab the buy and sell orders, before cramming them into tables
    • If it’s a region, grab the region data, which says where to get the buy and sell order data

That’s pretty much it. There are some twiddly bits (like languages), but the rest of it is just AJAX calls and looping through the result sets for display. I’d hope the code is fairly simple to understand (if you know any JavaScript/jQuery), but if you have any questions, feel free to throw them my way.
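To give you the flavour without opening the repo, the core is little more than this (a sketch with approximate field names; the real code walks the hrefs from the CREST root rather than hardcoding them):

// Fill a region dropdown from CREST, skipping wormhole regions
$.getJSON('https://public-crest.eveonline.com/regions/', function(data) {
  data.items.forEach(function(region) {
    if (region.id >= 11000000) { return; }  // wormhole region IDs start here
    $('#regions').append($('<option>').val(region.href).text(region.name));
  });
});

// Once an item and a region are picked, grab the sell orders and table them.
// regionData and typeHref come from the user's earlier clicks.
$.getJSON(regionData.marketSellOrders.href + '?type=' + typeHref, function(result) {
  result.items.forEach(function(order) {
    // cram into the table here
  });
});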

https://github.com/fuzzysteve/CREST-Market-Viewer for the source. MIT licensed.