Category Archives: Dev

Updated Blueprint Details

I’ve been a trifle remiss in posting of late, especially with the big changes that have come out recently. Sorry about that.

However, I have kept on top of getting the SDE conversions out, including the blueprint details, yanked out of the new blueprints.yaml file and shoved into the appropriate database tables. The good news is, they’re a lot easier to work with than the old versions. The not-so-good news is, I’ve not been back-porting them into MS SQL Server, so that’s a step you’ll have to manage yourself. (It should be pretty easy, tbh. If you have trouble, give me a yell, and I’ll write something up.)

The Tables:

ramActivities

This isn’t actually a new table; it’s just handy for knowing which activity ID is which actual activity, such as 1 being manufacturing.
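For example, assuming the usual activityID and activityName columns, a quick lookup:

[lang=sql]
-- which activityID is which activity
select activityID, activityName
from ramActivities
[/lang]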

industryActivity

If you need to know how long it takes to do something, this is your table: blueprint type ID, activity ID, and the time in seconds to do it. The time is unmodified by TE, so you’ll be reducing it.
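As a minimal example (assuming the duration column is simply called time), this should pull the unmodified manufacturing time for blueprint 1320, the Expanded Cargohold II blueprint used in the examples further down:

[lang=sql]
-- base job duration: 1320 = Expanded Cargohold II Blueprint, activity 1 = manufacturing
select time
from industryActivity
where typeID=1320 and activityID=1
[/lang]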

industryActivityMaterials

This is the core table for most of you good folk: blueprint type ID, activity ID, material type ID, quantity, and whether it’s consumed. With the changes coming in Phoebe, consume is deprecated. I’m leaving it in, set to 1 for everything, so things won’t break, but it’ll go away in time.

industryActivityProbabilities

For invention and reverse engineering. No longer do you need to create your own table; CCP provides all the information.
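As a sketch, assuming the columns are typeID, activityID, productTypeID and probability, this should give the base invention chance for blueprint 1320, along with the T1 blueprint it’s invented from:

[lang=sql]
-- base chance of inventing the Expanded Cargohold II Blueprint (activity 8 = invention)
select typeID, probability
from industryActivityProbabilities
where activityID=8 and productTypeID=1320
[/lang]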

industryActivityProducts

The outputs from a blueprint, for its various activities. Handy when you want to be able to determine what is invented from what. It also includes the number of items created in a manufacturing batch, or the number of runs from invention and reverse engineering.
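For instance, something like this should show what blueprint 1320 turns out when manufacturing, and how many per run:

[lang=sql]
-- what blueprint 1320 produces when manufacturing, and how many per run
select productTypeID, quantity
from industryActivityProducts
where typeID=1320 and activityID=1
[/lang]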

industryActivityRaces

This one will be going away. Pre-Phoebe, it let you know what you could reverse engineer from what, race-wise. That’s no longer needed.

industryActivitySkills

The skills to do each activity.

industryBlueprints

I needed somewhere to keep the maxProductionLimit value. That’s what this table is for.

 

As you can see, everything is a lot simpler to query now, with everything tied to the blueprint type ID rather than to the final product. In the examples below, I’m using Expanded Cargohold IIs: 1320 is the blueprint, 1319 is the actual item.

[lang=sql]
select typeName,materialTypeID,quantity
from industryActivityMaterials iam
join invTypes on (iam.materialTypeID=invTypes.typeID)
where activityid=1 and iam.typeid=1320
[/lang]
To make something.

[lang=sql]
select typename,materialTypeID,iam.quantity
from industryActivityMaterials iam
join industryActivityProducts iap on (iap.typeid=iam.typeid)
join invTypes on (iam.materialTypeID=invTypes.typeID)
where iam.activityid=8 and iap.activityid=8 and iap.producttypeid=1320
[/lang]
To invent the blueprint to make them. This gives the materials for the invention, but not the required T1 blueprint. To get that, you could just pull iam.typeid.
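A minimal variation of the query above, pulling iam.typeid so you also get the T1 blueprint’s type ID alongside each material row:

[lang=sql]
select iam.typeid, typeName, materialTypeID, iam.quantity
from industryActivityMaterials iam
join industryActivityProducts iap on (iap.typeid=iam.typeid)
join invTypes on (iam.materialTypeID=invTypes.typeID)
where iam.activityid=8 and iap.activityid=8 and iap.producttypeid=1320
[/lang]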


The formulas for using all these figures have been drawn together by Qoi, in this forum post https://forums.eveonline.com/default.aspx?g=posts&t=362493 which is handy. The core one is:

quantity after adjustment = max(runs, ceil(round(quantity × (1 - ME/100) × (1 - facility bonus/100) × (1 - team bonus 1/100) × (1 - team bonus 2/100), 2)))

The round, in case you’re curious, is there to eliminate a pesky floating point rounding error, which leads to some materials being 1 too high.
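As a sketch of how that plugs into the tables above (made-up values: 10 runs at ME 5, no facility or team bonuses, and treating ‘quantity’ as the base quantity multiplied by the number of runs), MySQL’s greatest(), ceil() and round() map straight onto the max, ceil and round in the formula:

[lang=sql]
-- made-up example: 10 runs of blueprint 1320 at ME 5, no facility or team bonuses
select typeName,
       greatest(10, ceil(round(iam.quantity * 10 * (1 - 5/100), 2))) as adjustedQuantity
from industryActivityMaterials iam
join invTypes on (iam.materialTypeID=invTypes.typeID)
where iam.activityid=1 and iam.typeid=1320
[/lang]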

Google Docs, the XML API, and the nightmare of importXML

I loathe importXML. Truly despise it.  Sure, it works for some things, but the moment that you want to get more than a single piece of XML data back from each call, it becomes a problem.

Sure, you can have an importXML call which has multiple attributes in the XPath entry, but that depends on the site never changing the order in which the data comes back. Which isn’t the world’s best idea to rely on.

There is, however, a solution. It’s one that I’ve been looking at recently, since Mynnna linked me a forum post about someone loading asset information into Google Docs. It’s a neat solution, especially because the asset API is a more complicated one, with nesting (assets can exist within assets).

That got me looking at other APIs, and led to the creation of a new GitHub repository where I’m keeping the code for some new custom functions for loading data. I’m not using them myself, but that’s because I’m a bit of a database snob. I can do things there which would be an absolute PITA to do in a spreadsheet. However, I am a code junkie, so this lets me scratch that itch (I can do this better! Tada!)

 

I thought I’d explain a little of what goes into these functions, basing it on the simplest of them: the outpost loader function.

function loadOutposts(){
  var outposts= new Array();
  var url = "https://api.eveonline.com/eve/ConquerableStationList.xml.aspx";
  var parameters = {method : "get", payload : ""};
  var xmlFeed = UrlFetchApp.fetch(url, parameters).getContentText();
  var xml = XmlService.parse(xmlFeed);
  if(xml) {
    var rows=xml.getRootElement().getChild("result").getChild("rowset").getChildren("row");
    for(var i = 0; i < rows.length; i++) {
      var outpost = [rows[i].getAttribute("stationID").getValue(),
                 rows[i].getAttribute("stationName").getValue(),
                 rows[i].getAttribute("stationTypeID").getValue(),
                 rows[i].getAttribute("solarSystemID").getValue(),
                 rows[i].getAttribute("corporationID").getValue(),
                 rows[i].getAttribute("corporationName").getValue()
                 ]
      outposts.push(outpost);
    }
  }
  return outposts;
}

If you want to have a function that returns a bunch of rows and columns, you have to build an array of data, and return that.

An array is a simple idea. It’s a way of bundling up information. A single-dimensional array is a list. A two-dimensional array is a table (like a spreadsheet). You can have more levels, which become somewhat harder to visualise (3-dimensional: a table where each cell can contain a list of data, and so on).

So you have to take the XML data and format it into a 2-dimensional array. Thankfully, this works easily. Each row in the API becomes a row in the array. You build a list, and add it to the bottom of the total array (known as a push).

Step one: Get the XML. That’s what the UrlFetchApp.fetch line does. At this point, you just have a bunch of text, which happens to be XML.
Step two: Turn the text into a data structure your code can deal with. That’s the XmlService.parse line. You now have a data structure that your code can access and loop through.
Step three: Make sure it was actually XML you had. If the parser couldn’t handle it, the xml variable wouldn’t hold anything usable, so the if (xml) test would fail.
Step four: Loop through it. We don’t care about anything but the rows, so we burrow down to that level (the getChild and getChildren line). With that split off, we now have an array (a 1-dimensional list) of rows. The for loop iterates through that, with a counter that points at each entry (the address of the entry).
Step five: Create a list of data about each outpost. Because we’re working with a proper XML engine here, attribute names are case sensitive, unlike with importXML.
Step six: Push the list of data about a specific outpost onto the bottom of the list of outposts.
Step seven: Repeat steps five and six until it’s finished.
Step eight: Hand the array back, with a return statement.

That’s pretty much the core of all the functions. Some are a little more complex, because they’re turning typeIDs into typeNames and so on. That’s done by generating an array where the address is the typeID and the entry is the name. You can see an example of that in the MarketOrders function, for stations and typeIDs. If you’re wondering why I’m adding a _ to it, it’s to make sure it’s treated as text, so no funny business with number comparison or arrays with holes happens.

For the Market Orders function, you’ll also notice it requires the moment.js library to be loaded. This is because the scripting engine behind Google Sheets handles dates very badly; moment.js makes it far easier to do something like add 90 days to a date.

Anyway, here are the required steps to get one of the scripts working (excluding anything like the moment.js library, and additional sheets, mentioned in the comments for specific scripts):

 

  • Open up a new or existing sheet.
  • Tools menu -> Script editor.
  • If you haven’t done this before for this sheet, it’ll ask you what you want to create a script for. Pick ‘Blank Project’.
  • Delete the text which it shows (the function myFunction() { } stuff).
  • Paste in the text for the function of mine you want to add.
  • As long as there aren’t other specific steps, you’re now done. The Eve Central and outpost scripts don’t need anything else.

At this point, you can just use the functions in your regular sheets 🙂

 

If there are any APIs you can’t get working, give me a yell, and I may be able to help.

Excel and the JSON CREST feeds

More and more things are being put into EVE’s new public CREST endpoints.

For people who do industry, the two most important ones which are now there are cost indexes and adjusted prices.

Now, I’m not much of one for using spreadsheets myself, preferring to code a custom solution, but I know there’s a large group of people who won’t live without them.

Lockefox has put together a solution for Google Docs. I’ve not used it, but it probably works well. If you need it, go take a look.

That leaves Excel. It used to be that working with things in Excel was easier, as it had better handling for loading entire XML sheets, rather than single cells. Unfortunately, it has no native JSON support. Even if it did, this is nested data, which makes it somewhat harder to pull out.

Looking at the cost indexes, first you have an array of solar systems, and within each of those you have an array of cost indexes. So you have to manually flatten that 3-dimensional data structure into a 2-dimensional one.

I spent some time working on this, and came up with: costIndexes

It’s a macro-enabled Excel workbook, built in Excel 2010, which loads in the cost indexes. I leave price data as an exercise for the reader. It takes a while to run, but that’s because it’s processing a relatively large file, and it probably isn’t the most efficient way of doing it.

Now, to explain how it works, it’ll be easier if you open up Visual Basic, from the Developer ribbon (File -> Options -> Customize Ribbon, tick the Developer tab).

It includes 3 files from http://www.ediy.co.nz/vbjson-json-parser-library-in-vb6-xidc55680.html: the JSON module, plus the cJSONScript and cStringBuilder class modules.

Then you have the following subroutine in the source for the workbook, and finally a button which calls the Test() subroutine. All in all, it’s actually pretty simple (because the JSON parser makes it easy).

Long term, there are probably better ways to do it, using .net or similar. But it works.

Sub Test()
    Dim oHttp As Object
    Dim jsonText As String
    Dim jsonObj As Dictionary
    Dim jsonRows As Collection
    Dim jsonRow As Dictionary
    Dim costIndex As Dictionary
    Dim ws As Worksheet
    Dim currentRow As Long
    Dim startColumn As Long
    Dim i As Long
    
    On Error Resume Next
    Set oHttp = CreateObject("MSXML2.XMLHTTP")
    If Err.Number <> 0 Then
        Set oHttp = CreateObject("MSXML.XMLHTTPRequest")
        MsgBox "Error " & Err.Number & " has occurred while creating a MSXML.XMLHTTPRequest object"
    End If
    On Error GoTo 0
    If oHttp Is Nothing Then
        MsgBox "For some reason I wasn't able to make a MSXML2.XMLHTTP object"
        Exit Sub
    End If
    
    oHttp.Open "GET", "https://public-crest.eveonline.com/industry/systems/", False
    oHttp.Send
    
    'Create a real JSON object
    jsonText = oHttp.responseText
    
    Set ws = Worksheets("CostIndexes")
    
    
    'Parse it
    Set jsonObj = JSON.parse(jsonText)
    
    'Get the rows collection
    Set jsonRows = jsonObj("items")
    
    'Set the starting row where to put the values
    currentRow = 1
    
    'First column where to put the values
    startColumn = 1 'A
    
    'Loop through all the values received
    For Each jsonRow In jsonRows
        currentRow = currentRow + 1
        ws.Cells(currentRow, startColumn).Value = jsonRow("solarSystem")("id")
        For Each costIndex In jsonRow("systemCostIndices")
            ws.Cells(currentRow, costIndex("activityID") + 1).Value = costIndex("costIndex")
        Next costIndex
    Next jsonRow
End Sub

Doing the SDE Conversions

I’m not planning on stopping doing these any time soon, as I need them for my own porpoises, but I thought it might be worth having a few instructions out there, so people can run their own conversions of the EVE SDE. Just  in case I get hit by a bus, or mauled by crazed Cetaceans.

The process is now a /lot/ simpler than it was before. Not as simple as I’d like, but pretty close.

With the first draft of this, there will be no screenshots, and it’s entirely from memory. I may update it when I do the next run.

Environment setup

This should only need to be done the once, rather than each time you want to do the conversion.

On your Windows box:

  1. Install SQL Server Express Edition 2012 with Tools.  Find it here. You want ENU\x64\SQLEXPRWT_x64_ENU.exe or ENU\x86\SQLEXPRWT_x86_ENU.exe
  2. Install MySQL Server. Find it here.
  3. Install the MySQL Workbench. Find it here.
  4. Install openDBCopy. Find it here. You’ll also want a copy of the Java 7 JDK installed, and the JAVA_HOME environment variable set to point at it.
  5. Drop a copy of the jTDS and MySQL JDBC drivers into the opendbcopy.copy plugin lib directory. The path should be something like C:\Program Files\openDBcopy\plugins\opendbcopy.copy\lib
  6. Configure SQL Server to allow TCP connections. Something like this guide should help.
  7. In MySQL, you want to set lower_case_table_names=2 in my.ini, under the [mysqld] section (see the snippet just after this list). Mine had to go into c:/windows/my.ini. This may be useful.
  8. Create a database in MySQL to do the conversions into. Creating a user is good practice too, though you could just use the root user. Make sure the database is set to utf8.
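For reference, the my.ini change from step 7 is just these two lines:

[mysqld]
lower_case_table_names=2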

If you’re wanting to do a Postgres or SQLite conversion, you’ll want a Linux box too. You can probably do it on Windows, but I’ve never done it there. If you have a Linux box on the same network as your Windows box, you could just use that for the MySQL server, rather than doing it all on Windows. Just update the connections below in an appropriate fashion.

Linux Box

  1. Install MySQL server
  2. Install Postgres
  3. Install SQLite
  4. Grab a copy of db_converter.py
  5. Grab a copy of mysql2sqlite.sh.

Actually doing the conversion

MySQL

  1. Download and extract the SDE.
  2. Fire up SQL Server Management Studio and log in.
  3. Right click on the Databases tree item, and select ‘restore backup’. Pick the file option, browse to the file you’ve extracted, select it, then restore it. This post should be of use.
  4. Make sure you have a user in SQL Server who has rights to the database. I just created one, then set it as the owner of the restored ebs_DATADUMP.
  5. Fire up the MySQL Workbench. Select the Migration option.
  6. Go through the wizard. The first database is the SQL Server one; the second is the MySQL one. Once it’s copied the schema, quit out. It doesn’t (as of 6.0.7) work right for all the data in the tables; it’s also slow.
  7. Fire up openDBcopy.
  8. Select the ‘copy data from a source database to a destination database’ plugin.
  9. On plugin configuration, double click the dir() option, and give it a directory. Hit next.
  10. On database connections, fill in everything as needed. Breaking my screenshot rule:

[Screenshot: the openDBcopy database connection settings]

  1. Hit apply and test for each. You’ll need to fix any problems at this stage.
  2. On the source model, select the proper schema. As my SQL Server user owns the imported database, dbo was appropriate. Hit ‘capture source model’; it will take a little time.
  3. Capture the destination model.
  4. Hit next.
  5. Hit next again, unless you don’t want to move everything.
  6. Hit next again, unless you’ve run into trouble before and want to adjust the mapping.
  7. Hit next again.
  8. Hit execute.

At this point, you should have a converted MySQL database. Export it, and then import it to wherever you want to put it. For the SQLite and Postgres conversions, that’ll be onto your Linux box, for the next stages.

Postgres

  1. Import the converted MySQL database onto your Linux box.
  2. mysqldump --compatible=postgresql --default-character-set=utf8 -r eve.mysql -u root -p eve
  3. Update the username and database name appropriately in the above command.
  4. ./db_converter.py  eve.mysql eve.postgres
  5. I needed to edit the eve.mysql file in a couple of places, as db_converter didn’t deal right with the tinyints. Just converted them to ints.
  6. create a postgres database to load the postgres version into.
  7.  psql eve -f eve.postgres

SQLite

  1. mysql2sqlite.sh -u root -pMySecretPassWord eve | sqlite3 eve.sqlite

Excel/csv

  1. Run a script (after making sure you have all the modules installed right 😉 )

Individual SQL

  1. Run a script

WTB – CREST market data endpoint

The popularity of sites like Eve-Central should go a long way to proving that the Eve Community wants market data.

Community projects like EMDR should also show that many people, myself included, consider it near vital that we have, if not instant, then at least time-delayed market data for use in projecting figures for manufacturing.

Unfortunately, the only way to get such data in an at all timely fashion is to cache scrape. At least some members of CCP have stated their disapproval of this method, which led to outcry the last time the EULA was clarified, leading to the current state of ‘It’s against the rules, but we’ll only do something about it if you do something else bad’. It has also led, with the release of Odyssey 1.1, to at least a period of no market data, as the cache format has been updated, which means all the scrapers need to be updated to take care of this.

I’m calling for CCP to take steps to combat this, offering a couple of suggestions on how this could be taken forward. The first leads to the least change required for the community. The second requires more change from us, but possibly the least from CCP.

Provide an EMDR feed

This would take the most effort from CCP: have a client that queries TQ and throws market data out onto the EMDR network. Pretty much all the major sites which deal directly with uploads talk with EMDR, so this would lead to an immediate integration with those sites. There’d have to be thought given to the order in which the data is spewed from the firehose: more popular data more frequently, less popular data less frequently. So Tritanium in Jita would be frequent, while 100mm Imp Navy plates in Minmatar faction warfare space might be once a week.

Provide a CREST Market data endpoint

With appropriate caching on the results, this is probably the best long-term solution, as it requires CCP to introduce no new technology to their mix. A pair of endpoints where you can request the basic order information, or the history, for a typeID in a specific region. Cache it when it’s requested, and serve that same data for the next hour. You’ll still have sites like Eve-Central amalgamating the data for different regions, providing historical analysis, and providing their own APIs for aggregate data for multiple typeIDs. You’ll probably also have services like EMDR taking feeds from the people querying the API, and providing the data to everyone else on them, reducing the requirement to do each of the direct queries.

CCP could even rate limit it per IP, and the players will take care of the rest.

 

Why

Perfect, timely market data is a liability for traders. No exterior market data is more of a liability, as it precludes analysis.

No market data causes third-party developers pain. Killboards won’t have an ISK value to apply to kills. Manufacturers will be stuck updating spreadsheets by hand, which is far from a fun job. This will lead to burnout.

It’d be nice if CCP could do both.

I’m indebted to those individuals that have written the means for us to gather market data. I’m not looking to denigrate your work in any way, shape or form. I’m just not comfortable with the bus factor of such projects.


Of course, cache scraping is used for other projects too, such as libdogma, a very handy fitting library which can be integrated into websites. Without the ability to get that fitting data, fitting tools are very very difficult to implement.

BOM for SQL server – Guest post

As I don’t use MS SQL Server, except when initially pulling the data, the SQL elsewhere on my site may not work with it. Especially the SQL for grabbing the bill of materials for blueprints. This is due to little things like there being no built-in greater() function, and SQL Server doing integer division when you divide integers.

So, when Cilegon offered me their updated SQL to work with MS SQL server, I said ‘yes please’, so I can pass it on to all you good people. And you not so good people.

 

So here it is. You’ll need to create the function once. The rest is for the BOM.


DROP FUNCTION dbo.greater
GO

CREATE FUNCTION dbo.greater(
@a INT,
@b INT
) RETURNS INT
as
BEGIN
RETURN
CASE WHEN @a >= ISNULL(@b,@a)
THEN @a
ELSE @b
END
END
GO

DECLARE @typeID int = 26374      -- typeID of the item being built
DECLARE @char_pe smallint = 5    -- character Production Efficiency skill level (not referenced in the queries below)
DECLARE @bp_me smallint = 2      -- blueprint ME level


WITH cte1_base_mats AS(
SELECT typeID = t.typeID,
name = t.typeName,
m.quantity,
bt.wasteFactor
from invTypes t
inner join invTypeMaterials m on m.materialTypeID = t.typeID
left join invBlueprintTypes bt on bt.productTypeID = m.typeID
where m.typeID=@typeID
union
select typeID = t.typeID,
name = t.typeName,
m.quantity * r.quantity * -1 quantity,
wasteFactor
from invTypes t
inner join invTypeMaterials m on m.materialTypeID = t.typeID
inner join ramTypeRequirements r on r.requiredTypeID = m.typeID
inner join invBlueprintTypes bt on bt.blueprintTypeID = r.typeID
where r.activityID = 1 and bt.productTypeID=@typeID and r.recycle=1
)

SELECT typeID,
name,
quantity = CASE WHEN @bp_me >= 0
THEN ROUND(dbo.greater(0, sum(quantity)) + (dbo.greater(0, sum(quantity)) * ((CAST(wasteFactor AS FLOAT) / (@bp_me + 1)) / 100)),0)
ELSE ROUND(dbo.greater(0, sum(quantity)) + (dbo.greater(0, sum(quantity)) * (CAST(wasteFactor AS FLOAT) / CAST(100 AS FLOAT)) * (1 - @bp_me)),0)
END,
base = 1,
perfect = dbo.greater(0,sum(quantity))
FROM cte1_base_mats
GROUP BY typeID,
name,
wasteFactor
ORDER BY name

SELECT typeID = t.typeID,
name = t.typeName,
quantity = r.quantity * r.damagePerJob,
base = 0,
perfect = r.quantity * r.damagePerJob
FROM ramTypeRequirements r
INNER JOIN invTypes t ON t.typeID = r.requiredTypeID
INNER JOIN invBlueprintTypes bt on bt.blueprintTypeID = r.typeID
INNER JOIN invGroups g on g.groupID = t.groupID
WHERE r.activityID = 1
and bt.productTypeID = @typeID
and g.categoryID != 16

Ship EHP Evaluator – The making of

As stated in my last post, making this tool was both easier and more difficult than I was expecting. I thought I’d explain that, and shed a little light on a couple of libraries that made this oh so much easier than originally expected.

The libraries in question are LibDogma and PHP-Dogma. These libraries meant that I didn’t have to think about what any modules did. I just tell them ‘This ship, these skills, these modules and rigs’ and I get back an answer saying ‘this hp, these resists’.

This is the core of any fitting tool, and what makes writing fitting tools so much, umm, fun. So many little caveats, the various stacking rules, and so on, and so on. Libdogma takes care of all of it. php-dogma lets me access libdogma from inside a php page.

With that out of the way, it was a straight run home, tying it into the code I’d already written for the evaluator to parse the input. And just a little for calculating and outputting the EHP. That’s the ‘Easier than expected’ bit.

The ‘more difficult than expected’ part comes from the expectations I had when I found libdogma. I’m using a CentOS 6 server, which comes with gcc 3.4.4. That won’t compile libdogma. So I found an updated RPM and tried that. No dice.

Eventually I downloaded and compiled clang (the latest version) from source. That built libdogma fine, and it passed all the tests.

At which point php-dogma refused to compile. It appears to have been a problem with php 5.3.27, as after an upgrade to 5.4, it all worked fine. And the first time I tried, I’d forgotten to set PKG_CONFIG_PATH, so it didn’t find libdogma. Took me a little time to run that one to ground.

Short sentences, to cover quite a lot of time banging on things, and hoping they’d work.

Anyway, as php-dogma is AGPL, I’ve released the code on GitHub under the AGPL as well. Short version, if you don’t know the license: add it to what you want, but if you let people use it, then you have to let them have access to the source, including when it’s provided as a service. Obviously, IANAL, so you’ll want to look at the license in more detail if you’re not happy releasing your code.

Ship EHP Evaluator – The Tool

The idea for an update to my scan evaluator tool came up on Twitter, and it struck me as interesting. It turned out to be both more and less complicated than I thought it might be, but I’ll go into that in a different post, in case you’re interested in the techy details.

The non-techy details are pretty simple. Take a ship scan, paste it into the tool and bang, you get an estimation of how much EHP the ship has.

Now, because I don’t have a huge database of people’s stats (and if I did, I wouldn’t use it, as I wouldn’t have people’s permission for it. So they’d stop giving me the stats), it’s assuming that they have all relevant skills at 5. And the irrelevant ones, but those aren’t relevant. It’s also not giving them any fleet boosts. And there’s the possibility the ship scan missed a tank module, so you didn’t tell the tool about it.

Aside from those caveats, all the test ships I’ve thrown at this have come out with the figures (+/- 1 hp) that pyfa was telling me (aside from one bug that I’ve reported.)

Of course, no warranty is given for people using this for nefarious purposes, such as scanning cargo ships and deciding how many ships to commit to attacking them.

Credit is due to Artefact2, without whom I couldn’t have done this (at least, not in the short period of time it took.)

I have some thoughts about functionality to add, such as alerts about open slots. But that’s a future update. This is the initial release.

No warranty is given for any purpose. If your computer explodes when using the tool, it’s all your fault; you should have been nicer to it.

Blueprint calculator update: ISK/HR normalized for 24 hour production runs

This only applies for T2 things with the blueprint calculator.

You’ll find a new row in the Time Calculations box, called ‘iskh 24H rounding’.

This is taking the time taken to manufacture all the runs for the invented blueprint, rounding it up to the next 24 hours (so a 3 hour run takes 24 hours, a 25 hour run takes 48 and so on) and then calculating the isk/hr as if that’s the time taken.

This is so you can have an idea of how much isk/hr you’re making, if you’re only putting jobs in once a day, at the same time. It’s not perfect, of course. If you have a window of several hours, then a 25 hour job probably won’t consume 48 hours of a slot.
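To give a made-up example of the effect: a batch of runs that takes 30 hours and clears 12 million ISK of profit works out at 400k ISK/hr on the raw time, but only 250k ISK/hr once the 30 hours is rounded up to 48.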

But if you look at something like a Hobgoblin II then you can see how it affects your profitability if you’re doing a single run per day.

 

As it’s not relevant for non-limited runs, it’s not added for those.

BP Calculator: T2 Invention update

Just another quick update.

The Form Entry page now has another submit button, called ‘Enter list – T2 Invention’. If you use this, it will take a list of T1 blueprints which can be invented from (ignoring anything else) and produce a list of everything you can make, along with a basic price for them. It doesn’t deal with decryptors, so it’s not doing all the work for you. And I’m not going to simplify it to just take a list of blueprints without the rest of the line.

But if you already have a list of blueprints, it’ll help pick out the ones you might want to look at. Just remember, there’s no market volume data. Just price data.