"Schema :users does not exist" while the DB has been created and migrated #56

Open
Mayeu opened this issue Apr 10, 2018 · 9 comments
Mayeu commented Apr 10, 2018

Hello 👋,

I am having some trouble when deploying a Phoenix project using ecto_mnesia.

My config is the following:

# config.exs
use Mix.Config

config :repository, Repository, adapter: EctoMnesia.Adapter

config :ecto_mnesia,
  host: :habitibot,
  storage_type: :disc_copies

config :repository, ecto_repos: [Repository]

import_config "#{Mix.env()}.exs"

# dev.exs
use Mix.Config

config :mnesia, :dir, '/tmp/mnesia'

# prod.exs
use Mix.Config

config :mnesia, :dir, '/var/lib/mnesia'

When I test the app locally everything works great, but the deployed release fails with an error saying the :users schema does not exist:

# I am building the things in an Ubuntu 16.04 container to match my deploy server
root@20e6d5e295d8:/app# cd /app/ \
                        && MIX_ENV=prod mix release \
                        && _build/prod/rel/habitibot/bin/habitibot migrate \
                        && PORT=8000 _build/prod/rel/habitibot/bin/habitibot foreground

# Building release
==> Assembling release..
==> Building release habitibot:0.1.1 using environment prod
==> Including ERTS 9.3 from /usr/lib/erlang/erts-9.3
==> Packaging release..
==> Release successfully built!
    You can run it in one of the following ways:
      Interactive: _build/prod/rel/habitibot/bin/habitibot console
      Foreground: _build/prod/rel/habitibot/bin/habitibot foreground
      Daemon: _build/prod/rel/habitibot/bin/habitibot start

# Running the migration
Loading repository..
Starting dependencies..
:habitibot
Starting repos..
==> Migrate all the repos
[Repository]
==> Run migration
14:45:52.205 [info] ==> Setting Mnesia schema table copy type
14:45:52.293 [info] ==> Ensuring Mnesia schema exists
Running migrations for repository
14:45:52.529 [info] == Running Repository.Migrations.CreateUser.change/0 forward
14:45:52.529 [info] create table if not exists users
14:45:52.541 [info] == Migrated in 0.0s
14:45:52.589 [info] == Running Repository.Migrations.AddIndexInUser.change/0 forward
14:45:52.590 [info] create index users_user_id_quest_bot_index
14:45:52.604 [info] == Migrated in 0.0s
Success!

# Starting the server
14:48:22.417 [info] Running HabitibotWeb.Endpoint with Cowboy using http://:::8000
14:48:24.830 [info] GET /

#...Doing stuff that will hit the users table...
14:49:17.554 [error] #PID<0.1647.0> running HabitibotWeb.Endpoint terminated
Server: localhost:8000 (http)
Request: POST /session
** (exit) an exception was raised:
    ** (RuntimeError) Schema :users does not exist
        (ecto_mnesia) lib/ecto_mnesia/table.ex:165: EctoMnesia.Table.transaction/2
        (ecto_mnesia) lib/ecto_mnesia/planner.ex:62: EctoMnesia.Planner.execute/6
        (ecto) lib/ecto/repo/queryable.ex:130: Ecto.Repo.Queryable.execute/5
        (ecto) lib/ecto/repo/queryable.ex:35: Ecto.Repo.Queryable.all/4
        (ecto) lib/ecto/repo/queryable.ex:68: Ecto.Repo.Queryable.one/4
        (habitibot_web) lib/habitibot_web/controllers/session_controller.ex:67: HabitibotWeb.SessionController.check_user_exist_locally/1
        (habitibot_web) lib/habitibot_web/controllers/session_controller.ex:17: HabitibotWeb.SessionController.create/2
        (habitibot_web) lib/habitibot_web/controllers/session_controller.ex:1: HabitibotWeb.SessionController.action/2

If I start a console and check the mnesia info I get:

iex([email protected])1> :mnesia.info
---> Processes holding locks <--- 
---> Processes waiting for locks <--- 
---> Participant transactions <--- 
---> Coordinator transactions <---
---> Uncertain transactions <--- 
---> Active tables <--- 
schema         : with 4        records occupying 812      words of mem
===> System info in version "4.15.3", debug level = none <===
opt_disc. Directory "/var/lib/mnesia" is used.
use fallback at restart = false
running DB nodes   = ['[email protected]']
stopped DB nodes   = [nonode@nohost] 
master node tables = []
remote             = [id_seq,schema_migrations,users]
ram_copies         = [schema]
disc_copies        = []
disc_only_copies   = []
[] = [id_seq,users,schema_migrations]
[{'[email protected]',ram_copies}] = [schema]
2 transactions committed, 0 aborted, 0 restarted, 0 logged to disc
0 held locks, 0 in queue; 0 local transactions, 0 remote
0 transactions waits for other nodes: []
:ok

But some of this info looks weird:

  • There are no disc_copies, even though my config only defines disc copies.
  • Why is there a stopped DB node, when I only defined one host via the :habitibot atom?

You can access the repository here if you want to try directly. The migration code can be found here, and it is based on the one proposed in the Distillery doc.

Any clue what could be happening here?

@AndrewDryga (Member)

Hello,

Looks like Mnesia doesn't use the :dir setting, so it falls back to the default directory, which would be empty. Having a repo to reproduce with is awesome, thank you, I'll definitely take a look.
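
As a debugging aid (a minimal sketch using standard :mnesia calls, not something from this thread), the directory Mnesia actually uses can be compared against the configured one from the release console:

```elixir
# Run inside `bin/habitibot console`.
# Note: Mnesia only honors the :dir setting if it is present in the VM's
# configuration before the :mnesia application starts.
IO.inspect(:mnesia.system_info(:directory), label: "directory in use")
IO.inspect(Application.get_env(:mnesia, :dir), label: "configured :dir")
```

If the two values differ, the release is not passing the setting through to the VM at boot.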


Mayeu commented Apr 11, 2018

Awesome, I'll stay available if you have any questions. Also, I'm still pretty new to Elixir, so the code may be a little disorganised and non-idiomatic 😅


Mayeu commented Apr 13, 2018

Hello again,

I'm doing some testing on the development version, and I am also finding some inconsistencies.

I have forced my :host to be set to :habitibot, and that shows in iex:

iex(1)> Application.get_env(:ecto_mnesia, :host)
:habitibot

But when I look at :mnesia.info, I don't see this host at all, only the nonode one:

---> Processes holding locks <--- 
---> Processes waiting for locks <--- 
---> Participant transactions <--- 
---> Coordinator transactions <---
---> Uncertain transactions <--- 
---> Active tables <--- 
schema_migrations: with 2        records occupying 136      words of mem
schema         : with 4        records occupying 795      words of mem
users          : with 0        records occupying 94       words of mem
id_seq         : with 0        records occupying 300      words of mem
===> System info in version "4.15.3", debug level = none <===
opt_disc. Directory "/tmp/mnesia" is used.
use fallback at restart = false
running db nodes   = [nonode@nohost]
stopped db nodes   = [] 
master node tables = []
remote             = []
ram_copies         = []
disc_copies        = [id_seq,schema,schema_migrations,users]
disc_only_copies   = []
[{nonode@nohost,disc_copies}] = [id_seq,users,schema,schema_migrations]
2 transactions committed, 0 aborted, 0 restarted, 0 logged to disc
0 held locks, 0 in queue; 0 local transactions, 0 remote
0 transactions waits for other nodes: []

Since this is the dev version, the DB was created and migrated via mix, while in production I use this dedicated migration script. So maybe the issue is not with ecto_mnesia at all.


Mayeu commented Apr 13, 2018

So I did have some errors in the script (I was passing the wrong configuration to the migrator). That should be fixed now. But there is still something going on with the host name.

As before, if I simply run the migration with the default configuration, the DB is created under nonode@nohost (which I think is an issue, because that's not the name of the node the app runs under afterwards). But if I use export MNESIA_HOST="[email protected]" to force my release host name, I get the following:

==> Migrate all the repos
[Repository]
==> Run migration
[otp_app: :repository, repo: Repository, adapter: EctoMnesia.Adapter]
03:26:40.747 [info] ==> Setting Mnesia schema table copy type
03:26:40.750 [info] ==> Ensuring Mnesia schema exists
03:26:40.752 [error] create_schema failed with reason {:"[email protected]", {'All nodes not running', :"[email protected]", :nodedown}}
Running migrations for repository
03:26:40.929 [info] == Running Repository.Migrations.CreateUser.change/0 forward
03:26:40.929 [info] create table if not exists users
03:26:40.930 [info] == Migrated in 0.0s
03:26:40.970 [info] == Running Repository.Migrations.AddIndexInUser.change/0 forward
03:26:40.970 [info] create index users_user_id_quest_bot_index
{"init terminating in do_boot",{#{'__exception__'=>true,'__struct__'=>'Elixir.RuntimeError',message=><<"Index for field user_id in table users already exists">>},[{'Elixir.EctoMnesia.Storage.Migrator','-execute/3-fun-6-',2,[{file,"lib/ecto_mnesia/storage/migrator.ex"},{line,127}]},{'Elixir.Enum','-map/2-lists^map/1-0-',2,[{file,"lib/enum.ex"},{line,1294}]},{'Elixir.Ecto.Migration.Runner','-flush/0-fun-1-',2,[{file,"lib/ecto/migration/runner.ex"},{line,98}]},{'Elixir.Enum','-reduce/3-lists^foldl/2-0-',3,[{file,"lib/enum.ex"},{line,1899}]},{'Elixir.Ecto.Migration.Runner',flush,0,[{file,"lib/ecto/migration/runner.ex"},{line,96}]},{timer,tc,2,[{file,"timer.erl"},{line,181}]},{'Elixir.Ecto.Migration.Runner',run,6,[{file,"lib/ecto/migration/runner.ex"},{line,27}]},{'Elixir.Ecto.Migrator',attempt,6,[{file,"lib/ecto/migrator.ex"},{line,127}]}]}}
init terminating in do_boot ({,[{Elixir.EctoMnesia.Storage.Migrator,-execute/3-fun-6-,2,[{_},{_}]},{Elixir.Enum,-map/2-lists^map/1-0-,2,[{_},{_}]},{Elixir.Ecto.Migration.Runner,-flush/0-fun-1-,2,[{_},

Crash dump is being written to: erl_crash.dump...done

So it seems my migration script does not work properly because it does not start the app completely? Maybe I should run the DB creation/migration as part of the boot sequence of the app? (No clue if that's a thing, BTW ^^)

Any idea on how to best solve this?
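
A guard like the following might help catch this class of problem early (purely an illustrative sketch, not code from the linked script): since Mnesia ties its schema to the node name, a migration task could refuse to run when the VM is not distributed.

```elixir
# Hypothetical check for the top of a release migration task.
# Mnesia keys its schema by node name, so a schema created under
# :nonode@nohost will not be found by :"[email protected]" later.
defmodule Repository.ReleaseTasks.Guard do
  def ensure_distributed! do
    if node() == :nonode@nohost do
      raise "VM is not distributed; start it with a node name matching " <>
              "the release (e.g. via vm.args) before migrating"
    end

    :ok
  end
end
```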


Mayeu commented Apr 14, 2018

After some research I discovered this demo repository, which performs the Mnesia migration via RPC instead of running the migration independently.

I have adopted the idea, and now my DB creation & migration seems to work. It's far from perfect for general use cases (I see a lot of potential edge cases not covered), but it works for me®.
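
For reference, the RPC idea can be sketched roughly like this (the node name is this app's; the exact structure of the linked repo may differ):

```elixir
# Start a throwaway distributed node, connect it to the running release,
# and run the migrations *inside* the release node via RPC, so Mnesia
# sees the real node name instead of nonode@nohost.
target = :"[email protected]"
true = Node.connect(target)

# Resolve the migrations path on the remote node, then migrate there.
migrations_path =
  :rpc.call(target, Application, :app_dir, [:repository, "priv/repo/migrations"])

:rpc.call(target, Ecto.Migrator, :run, [Repository, migrations_path, :up, [all: true]])
```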

I think what this issue highlights is some missing documentation around live deployment of a project using ecto_mnesia. I will try to find the time to write something up about deploying with Distillery.

In the meantime, feel free to close the issue if you prefer :)

Qqwy (Contributor) commented Sep 1, 2018

@Mayeu Please go ahead and write out these deployment steps, because we are struggling with exactly the same thing right now!


Mayeu commented Sep 2, 2018

@Qqwy Sorry, this is not a priority for me right now! But my project using Mnesia is available on GitHub. Here is a direct link to my migration task.

Note that this is really a side project I rarely touch, so I can't be sure it still works with the latest version of everything.

Qqwy (Contributor) commented Sep 22, 2018

@Mayeu I am almost able to make it work, with the following caveats:

  • It is required to start the current application as well, so that Ecto.Repo.Supervisor (and probably other processes as well) can read the application's configuration.
  • When running the migrate function directly from the command-line, the code fails with:
23:12:23.643 [warn] %CaseClauseError{term: {:aborted, {:not_active, :id_seq, :"[email protected]"}}}
23:12:23.644 [warn] [{EctoMnesia.Storage.Migrator, :do_create_table, 4, [file: 'lib/ecto_mnesia/storage/migrator.ex', line: 173]}, {EctoMnesia.Storage.Migrator, :execute, 3, [file: 'lib/ecto_mnesia/storage/migrator.ex', line: 27]}, {Ecto.Migrator, :"-migrated_versions/2-fun-0-", 2, [file: 'lib/ecto/migrator.ex', line: 44]}, {Ecto.Migrator, :verbose_schema_migration, 3, [file: 'lib/ecto/migrator.ex', line: 284]}, {Ecto.Migrator, :run, 4, [file: 'lib/ecto/migrator.ex', line: 155]}, {Enum, :"-each/2-lists^foreach/1-0-", 2, [file: 'lib/enum.ex', line: 737]}, {Enum, :each, 2, [file: 'lib/enum.ex', line: 737]}, {Repository.ReleaseTasks, :migrate, 0, [file: 'lib/repository/release_tasks.ex', line: 25]}]

What causes this problem (id_seq not active), I do not know: the folder Mnesia wants to use is there. Maybe it is a difference in what Mnesia thinks my hostname is:

$ MNESIA_HOST="[email protected]" MNESIA_STORAGE_TYPE=disc_copies _build/prod/rel/planga/bin/planga migrate
===> System info in version "4.15.1", debug level = none <===
opt_disc. Directory "/run/media/qqwy/Serendipity/Programming/Resilia/planga/planga/_build/prod/rel/planga/priv" is used.
use fallback at restart = false
running db nodes   = [nonode@nohost]
stopped db nodes   = [] 
master node tables = []
remote             = []
ram_copies         = []
disc_copies        = [schema]
disc_only_copies   = []
[{nonode@nohost,disc_copies}] = [schema]
2 transactions committed, 0 aborted, 0 restarted, 0 logged to disc
0 held locks, 0 in queue; 0 local transactions, 0 remote
0 transactions waits for other nodes: []

vs: (When running from a console)

$ MNESIA_HOST="[email protected]" MNESIA_STORAGE_TYPE=disc_copies PORT=4000 _build/prod/rel/planga/bin/planga console
(... some lines of startup text I removed here...)
iex([email protected])1> Repository.ReleaseTasks.migrate
===> System info in version "4.15.1", debug level = none <===
23:12:53.458 [info] Mnesia status:
opt_disc. Directory "/run/media/qqwy/Serendipity/Programming/Resilia/planga/planga/_build/prod/rel/planga/priv" is used.
use fallback at restart = false
running db nodes   = ['[email protected]']
stopped db nodes   = [nonode@nohost] 
master node tables = []
remote             = []
ram_copies         = [schema]
disc_copies        = []
disc_only_copies   = []
[{'[email protected]',ram_copies}] = [schema]
2 transactions committed, 0 aborted, 0 restarted, 0 logged to disc
0 held locks, 0 in queue; 0 local transactions, 0 remote
0 transactions waits for other nodes: []

However, I'm able to run the code from a console connected to the application.
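
The two info dumps above differ in exactly one way: the node name. `bin/planga migrate` runs in a non-distributed VM, while `console` starts distribution as [email protected], and Mnesia keys its schema by node name. A quick check from each context:

```elixir
# Mnesia binds its schema (and table copies) to the node name, so data
# created under one name is invisible under the other.
node()
#=> :nonode@nohost         under `bin/planga migrate`  (matches its log above)
#=> :"[email protected]"  under `bin/planga console`  (matches its log above)
```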

Qqwy (Contributor) commented Sep 22, 2018

Hmm... It seems that ecto_mnesia does not properly read the MNESIA_HOST and MNESIA_STORAGE_TYPE environment variables, although they are specified as explained in the README.

Hard-coding them in the config seems to have made it slightly less problematic to run.
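
For anyone landing here: hard-coding means replacing the env-var lookup with literal values in the config, something like this (values are examples for this app; double-check against the README's env-var syntax):

```elixir
# config/prod.exs — literal values instead of the MNESIA_HOST /
# MNESIA_STORAGE_TYPE environment lookups.
use Mix.Config

config :ecto_mnesia,
  host: :"[email protected]",
  storage_type: :disc_copies
```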
